10.5446/56851 (DOI)
Hello, I'm Ilya Zverev and I spent 12 years doing fun things in OpenStreetMap. Today I'm going to show you one thing not directly related to maps, but something useful I made last summer. I did it basically for the City of Tallinn. Tallinn is the capital of Estonia, named Green Capital of Europe for 2023. It has lots of interesting things; one of them is free public transport. As in, if you're registered in the city, you don't have to pay for buses, trams, even trains. Besides being free, it's also reliable, almost always comes on time, and the routes cover the entire city and are really great. This was obviously planned to move people from cars to public transport, to reduce the number of private cars. It didn't exactly work, because each year we have more and more cars, but it's good enough that I sold my car, and for the past half a year I have relied exclusively on public transport. The problem with it is that to use buses you need to know the routes, the schedules, the route numbers. Of course you can use the printouts at the bus stop, but being technical people, we'd better use some websites or apps. There is an official website, transport.tallinn.ee, that's advertised in buses, and for some reason it's advertised for mobile phones. It is not great on mobile phones, because its front page is 2 MB and most of it is JavaScript. Not fun. But on the computer it's pretty useful; it can do anything you can think of: plotting routes, seeing buses on the map, watching arrivals for a stop, looking through news for closures and stuff. But when you need to know how to get home, you usually stand outside some shopping center, it's pretty cold, and you won't wait for the website to load. You need some kind of a map. There are some options for transit apps, some big, like Google Maps, Yandex Maps, Trafi, Citymapper; they can do anything. The point of entry is a big interactive map with a lot of buttons.
And you can plot a route by transit, by car, and the problem, besides them collecting a lot of data about you, is that sometimes they offer you a taxi, because that's what they get money from. And it's weird. Like, you're going green, you're using public transport, and then your app tells you that there's a way to get there faster and easier: just use a taxi. Not great. And there are a lot of smaller apps that rely on the open data published by the transit agency. They are usually full of ads, or are paid, and they have very limited functionality, down to just having some tables, and that's all. I needed something better. I needed an app that wouldn't bury me in information, but would just straight up tell me what to do, where to go, what bus to take, and so on. And in the vein of open source, you know, if you want something done perfectly, you have to do it yourself. So last summer I set out to learn mobile development. Well, for multi-platform apps, there are not many options. Most of them rely on JavaScript, like React Native or Apache Cordova. And JavaScript is not the language you would choose to develop anything. So I looked for alternatives, and there is one called Flutter. That's a UI library using the Dart language. Dart is a small, fun language, like Go with the syntax of Java. It can be learned quite fast. Both have been open source, and with their pretty high adoption rate, they have thousands of open source libraries for virtually anything you can think of. So what I did was throw together a simple app that had a map for the nearest stop and a list of arrivals. The list of arrivals comes from our transit agency's open data. It publishes not only GTFS, which is the standard transit feed format containing stops, routes, and schedules, and not only real-time locations of all the vehicles, but also arrival times for stops. Like, you know, some stops have these screens that show when the next buses will arrive.
Usually they use some real-time data to adjust these arrival times. So if your bus is stuck in traffic, then you'll see on the screen that it's late. The problem is very few stops have these screens. But the agency calculates these arrival times for every stop in the city. And that's what I use for my app. I just take the nearest stop and print the arrival times for that stop. And I was astonished how immensely useful that app was. It covered most of my needs in public transport. For example, when you are out in the night waiting for your bus, you open the app and see whether it will come in 5 minutes, or in 25 minutes and you have to look for alternatives. That was amazing. And weird that nobody else had thought of that. You know, to make a great app, you need to use it yourself. And I use public transport, and still use it every day. So I know what I need. I needed to see arrivals, and at times I needed to plot a route. I used well-known services like OpenTripPlanner for plotting routes and Photon for geocoding. Both of these are very easy to install; they are just Java JAR files. I rented a server from Hetzner for three euros a month and that's all. It's really cheaper than cloud solutions. So when you want to install some web service, think about renting a server from Hetzner or from any other local VPS provider. It might be better than a cloud solution. So, fast forward a month later, my app is virtually done. And I really wanted to brag about the app. Because despite it being very simple inside, it has some design decisions that I'm proud of. And what enabled them is focusing. You know how some companies have mission statements. They direct what the company will do and won't do. They allow the company to focus. Having that for an app is also very useful. Because early on you decide what will be in your app and what your app won't do under any circumstances. It allows you to focus your efforts, to spend less time on your app and to polish your user interface.
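The core of that first version is simple enough to sketch. Here is a rough Go version of the nearest-stop lookup (the app itself is written in Dart; the stop names, coordinates, and function names below are my own illustration, not the app's actual code):

```go
package main

import (
	"fmt"
	"math"
)

// Stop is a transit stop, simplified from the GTFS stops.txt fields.
type Stop struct {
	Name     string
	Lat, Lon float64
}

// nearestStop returns the stop closest to the given location, using an
// equirectangular approximation, which is fine at city scale.
// It assumes stops is non-empty.
func nearestStop(stops []Stop, lat, lon float64) Stop {
	best, bestDist := stops[0], math.Inf(1)
	for _, s := range stops {
		dx := (s.Lon - lon) * math.Cos(lat*math.Pi/180)
		dy := s.Lat - lat
		if d := dx*dx + dy*dy; d < bestDist {
			best, bestDist = s, d
		}
	}
	return best
}

func main() {
	// Hypothetical stops with approximate Tallinn coordinates.
	stops := []Stop{
		{"Hobujaama", 59.4370, 24.7536},
		{"Viru", 59.4341, 24.7548},
	}
	fmt.Println(nearestStop(stops, 59.4338, 24.7550).Name)
}
```

A real version would load the stops from the GTFS feed and then fetch the agency's arrival times for the chosen stop.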
Your user interface in mobile apps is very, very important. So the focus of my app was here and now. No calendar choosers for a date. No moving a map to choose a starting spot. Although I did that at first; it was just too tempting. You're just standing somewhere and you want to know how to get somewhere. And with that focus, the app was reduced to roughly six screens. That was enough: looking at arrivals for the nearest stop, searching for a destination, and looking for routes to that destination. But since this is the Geospatial devroom, I will focus on maps. Because in my app there are four different maps. And that's unlike all the others. It's tempting to make the interactive map the entry point to your app. Everything starts with a map, with a location on it. But when you add an interactive map to your website, to a desktop app, to your mobile application, you make it like ten times harder to use. Because very few people really understand and can work with maps. It's like the most complex user interface element. And you should think twice before adding it. It is really easy to add. There are many libraries, like Leaflet, like the one Ivan presented, like such libraries for Flutter and Dart. You can throw them into the app and you have a map which can do anything: zooming, panning, adding markers, layers, everything. You save time when you add an interactive map. But you spend the time of your users, because they will have to learn how to use it. They will have to do a lot of things they didn't intend to do, like zooming a map. I don't open an app on a phone to zoom it; I just need to get information. So I didn't want an interactive map, like the common one. And hence I got four different maps in the app. That's what I wanted to show you. There's one map on the main screen. It's basically for choosing a stop. You scroll it, the stop changes, and you see it's labeled; you don't need to know the names of stops other than the one that's chosen.
So the label doubles for showing the selected stop. The zoom level is great; you don't have to choose it. All you have to do is just move the map. When you tap on a route, you see where it goes. There are labels, if you need them, but basically it's again a fixed zoom, which also shows buses that are yet to come, but not those that have already left. That's important: buses that have left don't distract you. You have to show as little information as possible. You have to show only the important information. A lot of developers struggle with that. It's much easier to just dump all the information you have and make people deal with it. That really shows in the most complex screen there is: the route plan. How do apps usually show a route plan? They split the screen into two parts, the map and the description. The map has the entire itinerary, so you have to zoom and look at different segments where you need to turn, where you need to wait. And the second part is the route description with lots of text, with tables, numbers, lines, stop names, and you have to study it to know when to get off a bus, for example. And there's constant switching between the list and the map to understand the route. It's really hard to use. I always struggled with making sense of this interface. I needed something simpler for my app, but how do I come up with it? I had one constraint. One thing that nobody should ever do on a mobile phone is zooming. All the maps have to be zoomed at exactly the required level. And if you don't move the map, that's even better. Static maps are the best maps. So what I did was split the route into parts, because at every point you're traveling through just one part: you're either walking or riding a bus or a tram. And the app tells it as it is. You have to walk some hundred meters, you have to board the bus that comes at this time, leave the bus at this exact point in time. And the maps just illustrate: here's where you need to walk.
Here is the tail of the bus route, so you know when to get off. And when you're actually there, it will show your GPS location. And on this transfer you have to wait for some minutes, and so on. So it's pretty useful. I think I never had to move any of the maps, although it's possible. But they just don't obstruct the list. And I'm really proud of this screen. So what this shows: you have to think twice before adding an interactive map. First, can you replace it with a static map? If not, what is the minimum interactivity you can afford to have? Because every extra bit of interactivity, every extra bit of information takes time from your users. And the time of your users is very important. Unless, of course, your KPIs include time spent in the app. My KPI is time not spent in the app, so the average session in the app is like two seconds. Most of the time I just open it, look at arrivals and close it again. So yeah, I published the app in September and did a few updates since then. It doesn't have a lot of users. Most new users came just last week, when I went on our national television, showed the app and explained it. By the way, during the recording I saw the presenter tapping on the map to choose a destination, and the app didn't support it, but that gave me an idea. So now when you tap on the map, it centers there and also zooms a bit. So with a couple of taps, you can set your destination location pretty easily. I really hope that you will steal my ideas for your apps and that maybe our transit agency will publish another version of their website or an app that will be exactly like my app, so that using public transport will not be as hard as using taxis or private cars. Because in the end I just want to have fewer cars on the street. If you have any questions or want me to help you with the interface, write me. And have a nice day. See ya. Thank you. So hello everyone. I define myself through OpenStreetMap. I did a lot of things.
It's my seventh time at FOSDEM, I think. Last time I was talking about building a Telegram bot for collecting hundreds of points of interest. And since then I moved to another city and found another challenge. And it was good, as you have this user experience with public transport: you know exactly what you need to get all the information. And maybe that was a good example of how it is easier to develop an application when you are close to the user experience. And one question might be: how hard is it to adapt your app to another city? Well, first it's really important that a city has open data. Because without open data none of this would be possible. And then for the app I use all the kinds of open data that the city government provides, including GTFS, the timetables, real-time bus locations. Some cities, like Helsinki, have this. How hard will it be to adapt? Well, if you dig into the code, then not very hard. Because the most complex parts are done by OpenTripPlanner, which supports everything out of the box. So basically you will have to code a bit to change how real-time locations and arrivals are processed. But generally I don't think that forking the app for another city should be hard. Okay, so let's see whether it will pop up for other cities as well. And did you talk with any of the officials about your app? Well, not yet, but when I went to the TV interview, the woman interviewing me, after the program, promised to connect me with a person from the Transport Authority to discuss this. So there is a real chance that something will be done. But I really hope that me promoting the app at this conference and through other means, like in newspapers and stuff, will make somebody with more access, with better marketing, use the ideas for the web interface. So it won't be my app that's popular and in use by everyone. But the UI, the simplicity, will be more widespread. So I won't have to do anything and maybe even stop paying for my server. Okay.
And is the app accessible? Yeah, if by accessibility we mean for people with worse vision or for people who need hearing aids, it's really fun and a pleasure to make the app accessible, like labeling every user interface element. And as you may understand, maps by definition are not accessible. When you add an interactive map, or even a static map, you cannot describe it in words. You can interact with words or with limited navigation functions. So these interactive elements are something to avoid when you make an accessible app. And I should say that with my app, all the crucial functional elements are accessible. You can see arrival times, you can get to them without looking at maps. And you can get an outline of the route you need to take without looking at the maps. Like, maps are there only for illustrative purposes. They are not the center of the interface. And that I consider my greatest achievement with this app. Okay. And do you think the app is finished already, or what would be next to be added? Yeah, I think it was finished in late November. But I did go and publish another release, because when the app went really public, when it gained 1,000 users, then I got some really useful comments for improving it. But overall, I don't think I made any noticeable change in the past three or four months. It's an app that doesn't need any improvements. Okay. That's good feedback from the users, isn't it? Great. Yeah. Okay. So thanks, Ilya, for being here at FOSDEM. It's always appreciated to have you here. And I think you shared all the information on the app.
In this talk I show the app that I've made in two months while learning the Flutter framework. It was built on open data and open source libraries. But what separates it from the rest is not code - but experience. Maps on mobile screens are hard to do properly, and there's always a temptation to add another screen, another button. It turns out you need a bit more than skills to make a perfect app: at the least, you need to use it daily. Last year I moved to Tallinn, Estonia, which is also known for its public transport system - free for city residents. It is so good, reliable and optimized that I sold my car and use buses and trams exclusively for all my city travels. While planning a transit route is pretty straightforward with the official website, usually I need transit-related information on the go. On a phone. And that's where both the website and many big and small apps show their imperfections. Download sizes, ads, user interface. Instead of focusing on my problems, I have to learn and work around the apps' issues. As an open source developer, that seemed not an issue - but a challenge. I've made interactive maps for the past ten years, and I know one should not use these libraries with default settings. Nor should one ever rely on zooming. Could I make an app with maps that doesn't require more than a couple of taps to get all the needed information? How hard would it be to keep the number of buttons and other interactive elements to a minimum, while not sacrificing the functions I need every day?
10.5446/56854 (DOI)
Our next guest says he's a recovering C developer in the embedded space. So he's going to talk to us about why he now uses Go in the embedded Linux space. Welcome to this talk. I'm going to talk about Go. The premise is that I want to convince you that you should use Go for your projects: for your next software project, for your existing software projects. My name is Zygmunt Krynicki and these are the confessions of a recovering C developer. Before we begin, a little bit about me. I'm about 40 years old. I've been programming for about 30 years, including 20 years professionally. I've been using C for most of my life, then Python, and for the last six years I've been using Go. You may know me from my previous work, what I've been doing, some public open source work. Currently, I'm a principal technologist at the Open Source Technology Center at Huawei. In my spare time, I really enjoy retro-programming, because it lets me get insights into the technology I used to play with as a kid. So let's get to it. I've split the presentation into several chapters. I'm going to talk about the world that we are in right now as software developers working with embedded things. I'm going to spend some time on Go tooling, on the tools you will experience by using Go, and on why I think these are really important. Then I'm going to give you a quick overview of the language. We don't really have time to get into the nitty-gritty details of everything that is interesting, both good and bad, about Go, but I'm going to try to do my best to convey the general feeling. Then I would like to spend a little bit more time on one particular feature, Go interfaces, which I think are just fantastic; they are a great improvement to virtually everything I do as a programmer. Then I have a small section with just the highlights of cool and interesting small features of the language that make everything, again, a little bit better than if you didn't have them.
And then we can open up for questions. So that's the agenda. I'm going to switch to a different mode. Okay, so part one, the world. My premise is simple. The world has changed and will keep on changing. And just as we moved from small microcontrollers to effectively small computers, or really small servers, the complexity of the software running on top of these machines has changed as well; it has matched the capacity of the new hardware. And in this complex world of software, maintenance and security have become more important than ever before. There's interesting new legislation around software updates. There's interesting new legislation around maintenance requirements on software vendors. And I think this trend will only continue, where we can no longer think of the things we make as temporary toys, but as long-term-support products that are actually important. If they fail, the failure is meaningful, not just as an inconvenience, but more and more as actual danger. And we have to think about the tools we use to develop the software, because tools vary and they matter a lot for the quality of the things we can produce. This is why I want to talk about Go tooling. I believe Go tools are a joy to use. They are well done, they are comprehensive, they reach all the relevant parts of engineering. They are standardized; unlike some of the other technologies, they have not spread around as a zoo of third-party proposals that keep reinventing the same wheel. Go is very cohesive in the sense that it provides a lot out of the box, providing a very rich, fertile ground for building applications, with libraries then filling the missing gaps. I would like to cover a couple of things that I think are just very important, especially from the point of view of writing embedded software projects. The first thing is that Go is extremely easy to build with.
If you have been using any kind of embedded programming environment, weird cross compilers, very complex sets of tools, things like BitBake or the whole Yocto ecosystem: it's great that we have them, but it's really fantastic that we can just code without touching any of that until the very last moment, when you need to do the packaging. Go is extremely easy to build with. go build handles your dependencies, handles your cross compilation, handles your configuration, and it's such a powerful combination when you realize that effectively, by installing Go itself, you have virtually all the tools you need to create programs for any platform. Ironically, plain make is actually still common in Go projects, so that you can still install your programs in the desired locations, so that you may copy additional files that you would like to have. But even that is not necessary. The fact that you can effectively just use the single built-in go build command to do pretty much everything required for a complex project is a fantastic refresher, coming from C and the plethora of complex tooling that has grown around it over the years. And speaking of building, the actual builds are very efficient. They're very fast. This is done by some clever design in the language and some clever implementation in the compiler. And this translates to very fast iteration. You can build, test, and run your program in seconds, even for programs of considerable size. This is extremely useful. When I'm building my applications, I'm constantly writing tests. I'm constantly running them. I'm constantly trying things out. And if I think back to how I was doing things before, when the iteration was typically measured in minutes, it's just a completely different experience, especially when some of the other things integrate well with your environment. We're going to discuss those as well. So once you've built your program, it's actually very easy to share.
Go has a quite interesting method for handling dependencies and publishing. With go install, you can literally install anything that is publicly available. And unlike some of the other technologies, unlike some of the other languages, this is a highly portable solution. So it's very likely that you will be able to install something on Linux, Windows, and macOS alike. Obviously, there are exceptions. There are things that are platform-specific. But it's very easy to create portable code in Go. It's very easy to create portable applications and libraries, portable in practice, meaning that things will just work fine and that people actually support other platforms. And sometimes you don't care about this, but it's really convenient when you have to. And the builds that come out of Go are most likely static binaries. There are some exceptions, but it's very easy to just create a single binary that is self-sufficient, standalone, and can easily be shared. And this translates to very easy deployment as well. Because of static linking, you don't have a runtime or interpreter or a virtual machine that you have to depend on and provide as well. It's usually just one file, so it's extremely trivial. You don't have to think about where it's going to be located, how it's going to find its data. Go has a very interesting new mechanism to embed additional file systems within the binary itself. So even though you may require a large number of data files (for instance, you're building a web application that has a lot of templates, style sheets, and other things like that), you can actually embed all of that into your binary. So it still goes back to: it's just one binary you can copy, and this binary just works. And static linking here actually means not linking to the C library of your platform, unless you use Cgo, which we'll discuss in a moment.
So this means that you don't have this problem that you sometimes encounter in programming, where your code works on glibc but doesn't quite work on musl or other libraries, because that's just not a problem at all. You have very predictable runtime behavior this way. And this is really great for servers, containers, and embedded programs alike, because effectively what you coded and what you tested is going to behave exactly the same way in other environments, and it's extremely easy to prove so by just running it there. So cross-compilation is just a joy in Go. Forget setting up your cross-compiler, forget custom vendor patches, forget something that is broken, forget something that links to your host libraries by mistake, forget doing it in 20 different ways depending on the subset of technologies you use, as you may be familiar with from C. It's just brilliant. You just set the GOOS and GOARCH variables, and you can go build whatever you want, for whatever platform you want, from whichever platform you want. This works regardless of whether you're running Linux or Windows or macOS, regardless of the CPU architecture; Go is very portable in this sense. You can also compile your projects for Android, iOS, and WebAssembly, although I personally have no experience with that. And unless you use Cgo, which is effectively the bridge to use C code within Go programs, you don't need an external cross-compiler, so Go is really all you need for a good chunk of the projects I've been working on. So it's really that simple: two variables. And because the binary that you get out is static, you can just copy it and run it directly on your target. It's very convenient. I don't have a bullet point for this here, but I just want to mention it. Go doesn't have a preprocessor, and yet without a preprocessor it manages to be extremely portable and offers a way to do conditional compilation by effectively selecting specific files to compile out of the directory.
It does so using a convenient mechanism, either by convention or by declaration. And this means that the standard library is extremely portable: it has implementations for all the relevant platforms, and this translates all the way to end-user programs and third-party libraries. It's very easy to create portable code in Go, mostly because the standard library wraps around the rough edges, so the platform differences you usually have to care about manually just go away; but then, if there's something you have to do that is platform-specific, there's a very simple and convenient way to express that as well. So the entire package stays portable. Moving on to dependencies: Go is actually pretty brilliant at handling dependencies. This is something that has been added to the language since its inception, but now it's well established. There are two things I need to explain. A package is just something you can import in a file that you're working on, somewhat like an include statement; but if you want to actually depend on something, you have to bring in the whole module. A module is a set of packages. When you add a dependency to your program, you add the dependency on the module. And the place to get the module from is actually arbitrary. Go does not have a central naming registry like some of the newer languages have. You can put your code literally anywhere you like. There is a convenient central place that just indexes publicly available modules, but the module path is actually the URL, so it's very easy to publish programs, either branded with your Git forge or with your company name. It's super convenient, and there's no problem of name squatting. You don't have to think about reserving names upfront or anything like that. It's very practical. There's a lot more that I could say about dependency management in Go.
There are some very interesting things coming in 1.18, which is Go 1.18, the upcoming next release, that I think will be actually very meaningful to Linux distributions, because there will be a way to do, in a scalable way, what distributions try to do by forcing a single library version across all the programs using that library. But my point is that there's a lot more depth to it than that; we just don't have time to go into it. The overall impression is that it's just well done. It works very well in practice and I think you will enjoy using it. And again, there's a lot more than I have the time to cover, but I'm just going to mention some quick highlights here. There's a standardized way to test, benchmark, and fuzz-test your programs; fuzzing since Go 1.18, everything else was available before that. This means that you can expect very standard behavior from your own code, from third-party code, and from standard library code in how you measure correctness. There's great tooling around that for all kinds of interesting things like coverage reports, and fuzzing in particular is very interesting, because I think Go is the first mainstream language that has it built in, so I think it will be a very interesting new development for library authors to harden their libraries even more by adopting this technology to find bugs that may be lurking beneath the surface. Another very important thing in practice is that there's a standard way to reformat all the code in Go. This means that effectively, no matter which project you look at, it has a very standardized look and feel, both in the simple things about how the code is laid out, but it actually goes deeper: obviously not automatically, but there's a standard naming scheme for things in Go, and I've seen pretty much all the projects that I've interacted with adopt it. And this means that Go is just super uniform, no matter where you look.
It's not difficult to get into new code, it's not difficult to learn how other people wrote something, it's not difficult to explore the standard library; it's extremely readable. Unlike what you find in C, with its multiple prevailing styles, some of which, at least personally, I find kind of ugly and difficult to follow. So I think this is a great simplification, making the entire language platform accessible to people. And lastly, on tooling, there's an official language server. If you're not familiar with what that is, it is the modern way to handle programming languages within an IDE or a code editor. In the previous approach, the editor would have to figure out: oh, this is a language like that, this is how you parse it, this is how you can apply some syntax coloring to it, and maybe have a little bit of understanding of what it really means beyond the superficial, shallow syntax. A language server, by contrast, is effectively a part of the compiler running as a service within your system that looks at the project you're developing, keeps its internal memory structures up to date, and offers services to the IDE. So no matter which editor you prefer, if it supports language servers, and pretty much all the commonly used ones do, you can have a great coding experience: deep language understanding, deep refactoring features like symbol renaming across everything, like moving things around in a way that preserves the semantics. It's very convenient, and as a project grows in complexity, I think it's just fantastic to have this available in such a universal way, coming from the people who make the language directly. So that's the tooling side. Okay, interfaces, my single most precious feature in Go. If I could get interfaces in C, without anything else, that would be, I think, the single biggest improvement the language ever had. Anyway, let's get into it.
So interfaces are types: just like you can have a value of a type like int, you can have a value of a type which is an interface, and an interface is literally a set of methods, just the signatures of methods, the set of method signatures. So when you have a value of an interface type, you can effectively call one of these methods and nothing else. And unlike in some of the other languages, any type that has this set of methods is said to implement this interface. So you don't have to name this upfront, it's not explicit, you don't have to explicitly code this in your program, and you don't have to actually use a specific name. So the name of the actual interface is just not important. It means you can have multiple interfaces which are effectively the same interface, and not introduce arbitrary dependencies between parts of the program just because they happen to need to pass the same-looking things around. And interface values are actually small pieces of memory that you put on the stack. They're really pairs of words: one describing the type and one describing the value, typically a pointer if it's something larger, or maybe the value is actually baked into the interface value itself for things that fit in a word, which translates to efficiency. So it's really efficient to call methods through an interface, roughly the same as C++ virtual method calls, and it's prevalent in the language. So let's look at a quick example. If you are not familiar with Go syntax, it says there's a type called greeter, and the type happens to be an interface, and this interface has one requirement: there is a hello method which returns a string. Obviously you can have more methods in an interface, but it's very typical to actually have just one. So again, any type which implements the hello method returning a string implements the greeter interface without declaring it. So there's no equivalent for implements like in Java.
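As a minimal sketch of the implicit satisfaction just described (the greeter interface matches the slide, while the English type and the greet helper are illustrative names, not from the talk):

```go
package main

import "fmt"

// Greeter is literally a set of method signatures.
type Greeter interface {
	Hello() string
}

// English never mentions Greeter; it satisfies the interface
// simply by having a Hello() string method.
type English struct{}

func (English) Hello() string { return "hello" }

// greet can only call the methods listed in the interface.
func greet(g Greeter) string { return g.Hello() }

func main() {
	fmt.Println(greet(English{})) // prints "hello"
}
```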
And the name of the interface itself doesn't matter. So if there's an interface called helloer with the same method, it's the same interface; in practice, code will accept it identically. And this really decouples things around everything in Go. I will just do one example here. The standard library has the io package with a Writer interface, which effectively says: here are some bytes, write them, and it returns the number of bytes written and an error in case something failed. And another piece of the standard library, the Fprintf function from the fmt package, takes a writer, a format string, and an arbitrary collection of values, and does what fprintf in C would do. But the beauty of it is that you can provide arbitrary writers. So you can provide a file or a pipe or a network socket or something elaborate, something that logs to screen, something that paints it or prints it. And it's extremely practical not to duplicate the logic of formatting, by just providing an arbitrary interface for writing. And there are lots of interfaces within the standard library for I/O and for a lot of other things. It's really convenient to use in practice, it's type safe and it's quite performant. And if you think about the fact that interfaces are just sets of methods, then the question arises: okay, what about the empty set? Well, the empty set is, I think, the void-star equivalent of Go. It's fully type safe, and there's a lot of practical uses for it. You can pass any value through the empty interface. You can recover the actual type in a type-safe way. So you can say: okay, I got an interface, but I was really expecting a couple of different things. Maybe I got a string, so this is the case for a string. Maybe I got an int, maybe I got something else, and you can handle each case in a type-safe way. You can also use the reflection package to actually look at the thing you've got and explore it in depth.
You can do arbitrary operations on it through the reflection package. So it's very practical and, again, very prevalent in Go. And some of the things it is used for are going to be improved with the addition of generics in Go 1.18, because some things that used to accept an interface will be complemented by making that type something that is declared at compile time. So you can think of generic collections, you can think of a lot of examples where you can parameterize something by type. Currently that is done with the empty interface, but it is a very powerful method of providing this dynamic aspect to the language. Even the Fprintf function we looked at before takes a collection of values to format, and all of them are provided as an interface, and the logic inside the fmt package understands how to format different things depending on their type. The values that you pass can offer special methods that integrate with the fmt package, so that you can print something in a custom way. This provides a lot of flexibility, but again, in a very type-safe way. So this finishes the key highlight I wanted to make around interfaces, because I think they are the most fundamental thing that improves programming at scale, making it modular and testable through replaceable, interchangeable and loosely coupled pieces. But now I would like to spend a little bit of time just highlighting some smaller features of the language that don't have to be as large and complicated but provide tremendous value. So, reflection. We mentioned the empty interface. Reflection effectively allows you to look at anything that you've been given through an interface: you can look at the value, you can look at the type.
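Recovering the concrete type behind an empty interface, as mentioned above, can be sketched with a type switch (pre-generics style, matching the era of the talk); the describe function is illustrative, not from the slides:

```go
package main

import "fmt"

// describe recovers the concrete type behind an empty
// interface value, case by case, in a type-safe way.
func describe(v interface{}) string {
	switch x := v.(type) {
	case string:
		return "string: " + x
	case int:
		return fmt.Sprintf("int: %d", x)
	default:
		return "something else"
	}
}

func main() {
	fmt.Println(describe("hi")) // prints "string: hi"
	fmt.Println(describe(42))   // prints "int: 42"
	fmt.Println(describe(3.14)) // prints "something else"
}
```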
There's a whole array of methods to look at specific parts of everything possible in Go, like reading an array at a given index, writing to a map, working with channels, working with structures, calling functions, all of that, and it's obviously type safe and extremely convenient. And the standard library uses reflection in several places. For instance, there's a package for marshalling values as JSON, and you can provide it anything and it will produce correct JSON output. It's implemented using reflection, so it doesn't require any coding on your part: it just looks at the type at runtime, and it does that in a statically compiled, statically typed language. So, struct tags. I mentioned these earlier, and they go hand in hand with reflection. Reflection lets you read struct tags, and struct tags are just strings associated with distinct fields of a structure. So here we have a simple example: there's a type called person, which is a structure with two fields, Nick and Away, one a string, one a bool, and there's some metadata for how you would like to represent this as JSON. For instance, the Away field should not be mentioned if it's empty, that is, if it has the zero value, the default value, which is false for a boolean. And you can read struct tags through reflection; there's a mini-language inside, and this is all accessible through the API. The JSON package I've mentioned has a Marshal function which actually uses this. So we're providing an instance of a person, and it's going to look at that instance: okay, it's a structure, so I know what I need to do, it converts to a JSON object; it's going to look at the fields it has, it's going to use those struct tags to get hints about what it's supposed to do, and this is extremely practical.
I've been using this for JSON, for YAML, for D-Bus, for INI files. It's extensible, you can code your own library which uses this, it's very practical, and it's very easy to create something declarative that conveys the message; doing API reviews using those structures with struct tags is very convenient. And again, it's a small thing, but I think a small gem. Then there are zero values. This is so simple: there is no uninitialized memory in Go. Everything has the default zero value for its type, and there's a lot of convention around saying that your type should behave in a meaningful way as a zero value, so that it does not require additional initialization. One example is the standard library buffers, like the strings builder or the bytes buffer. When you put one on the stack, you just declare a variable of this type and it's already a correct buffer; you can start using it as a buffer, you can start writing to it, you can even start reading from it, although it's empty. It doesn't require additional steps, and this just makes everything a lot easier. There's no uninitialized memory, and things behave correctly without explicit additional steps when they are not required.
I mentioned symbol visibility earlier, and if you read the slide and you think, what is this, this is kind of crazy, that may be a valid reaction to something as radical as this: capitalization defines visibility of symbols in a package. Everything that is capitalized is a public symbol, it's part of your public API, you shouldn't break it, you should document it; and everything that is not capitalized is part of the internal implementation that is not accessible from outside of the package, and the compiler and the linker ensure that nothing like that happens. You can freely refactor those, you can freely change those, and while writing you're painfully aware of the subset of your API which is public and which you have to pay more attention to. I think this is so important in practice, because unlike in many other languages, I found a lot more attention paid to what the public API looks like, and a lot more attention to stability, because of this mechanism; it's very in-your-face as a direct approach to handling this problem. Go has another small gem, which is called deferred calls, and there's a quick example here: we open a file, we check if we managed to open it, if there's an error, and then we defer the close: we write defer and then f.Close(). It really means that no matter what happens within this function, however we return, and there could be multiple return paths out of the function, we will always close the file. And that's really it: it's coupling the acquisition and the release of the resource in the code by close proximity. It's easy to read a function from top to bottom and see if it's missing something, just by looking: okay, you're getting something here, you're checking for errors, you're releasing the resource later, that looks good, carry on.
It's much easier than the equivalent patterns you often see elsewhere, where you have a lot more duplication of the cleanup code, or specially handcrafted cleanup tables with goto and labels, or non-standard extensions like the cleanup attribute. This is also a little more generic: you can put a lot more than just a function call here, you can literally put a whole function containing your cleanup logic, and that's super convenient in practice, again because of the proximity of the acquisition and the release of the resource. So another small gem that has been added to the language recently is the embed package: embedding files within your program. Within your library or your program, whatever you're writing, you can say that you would like to embed some files from the file system into the binary, and you can choose to embed the data as raw bytes, as a string, or as an entire file system. So in this example you can see how a hypothetical application would embed some images, some templates and an index file for a web application. And you get something which looks like a file system; it's an interface, it has an open method for opening files, you can read directories, you can read specific files. And this is just so convenient, because going back to the whole deployment story, this means that you can now embed pretty much all of your data inside the same binary. So updating one binary updates everything in an atomic, cohesive step, and I think it's such a small but fantastic feature, I just needed to highlight it. So what is my message? I would like you to consider Go for your projects. Go is extremely practical, easy, safe and scalable, it has a fantastic standard library, it's got great tooling and a nice ecosystem of third-party libraries and programs around it.
So that's my message: get started with Go. If you want to learn about Go, there are fantastic web resources provided at go.dev, everything is in one place; really give it a go. Thank you. If you have questions now, I will cut the video and we should do some live questions. If you would like to talk to me, I'm available on IRC and Twitter, you can find my code in some of those places as well. Thank you for your time, thank you for listening. That's it, bye-bye. Thank you Zygmunt, you really convinced me into looking into Go in the embedded space. So to the audience, please refrain from any fights about whether to use Bazel or not to use Bazel, please take those to the Java Dev Room over there, thank you. I have questions here, and if you still have questions you can still type them in below. Well, since the talk was partly about sizes: have you seen any limitations of the Go memory management for the embedded space? So if you think about classic embedded, like what TinyGo is currently exploring, I have not looked into that much. I was mostly looking into virtually miniature servers that are running Linux and have way more memory than you would once have dreamt of on the desktop. So there I did have some issues with memory, mostly about ambient memory pressure, where you just have a service which takes a fixed amount of memory even though it's not doing anything. So you have to be mindful of what you actually keep in memory on top of what the language runtime requires you to keep in memory. And there are a couple of approaches for how you can do this. You can maybe split your binaries into smaller chunks, paying the tax of additional binaries, but then having the option to load just the right binary at the right time, and as it executes it will use the memory, but then the memory goes away.
Or by just debugging your application; maybe there's something not technically leaking, but not doing the thing you expected. So there's a range of tools where you can explore the contents of the memory of a running process at runtime; there's something you can easily include into any Go binary and then look at it remotely. So you can see roughly what is inside the process at runtime. And there are similar tools with which you can take a look at the binary size: okay, why is this binary so large? So for the memory, I have not personally run into something that would be critical, but I did run into a project where we were effectively trying to slim it down for such a small device that in my opinion it was no longer practical to go with Go, because effectively you were building for a device which would be obsolete, in my point of view. I'm going to stop here because the time is over. There is a breakout room for everyone who has additional questions. Go there, follow the link. Thank you.
The Go ecosystem has established itself as the language of choice for many cloud and server software. We present arguments that Go is also uniquely suited to thrive for Linux-based embedded development. We present our case in the context of the SysOTA and NetOTA projects.
10.5446/56855 (DOI)
Hi everyone, welcome to my FOSDEM 2022 talk, Z80: The Last Secrets. So first, who am I? My name is Anisse Astier, I've been a Linux engineer for many years now and have worked in embedded and consumer electronics. I like to take my mind off work by working on side projects that are not directly related. For instance, I gave a presentation two years ago in the Go Dev Room about an embedded system I designed as a toy for kids. I discovered the Z80 while writing an emulator, another side project. That is not the subject of this talk, but I wanted to share a few discoveries I made along the way. So the Z80 chip was first released in 1976 by Zilog. It's a CPU designed as an extension of the Intel 8080, which was already a bit popular in the 70s. It's a CISC design, a complex instruction set, and maybe its name was a bit of a forecast, but it peaked in the 80s, I'd say. Zilog still exists and still sells this CPU. There are a few Z80 machines; here are some. I won't present them all, that would need a talk by itself, but the most popular, I'd say, would be the ZX Spectrum. It has a very large community. We have here as well the TRS-80, a Sharp pocket computer, and the TI-80 series of TI calculators; here we have a TI-81. It was not only in computers, but in embedded systems like game consoles as well. We have the Master System and the Game Gear, which is basically a Master System in portable form, to simplify. The Z80 had many clones. For some reason, many vendors decided to clone it as early as the late 70s, since it was a very good design. So the original, as I said, was from Zilog. There were clones from Mostek, Sharp, NEC, Toshiba, Goldstar (now known as LG), STMicroelectronics, and even Soviet clones like the T34VM1. It's an 8-bit CPU, but it has register pairs and can do a few 16-bit operations. Its main register is A, the accumulator, and it has flags as well. So A and F are the registers that will interest us during this talk.
And it has other register pairs like BC, DE, HL, which were also, I think, on the 8080, as well as an alternate register set that can be used for context switches or function calls or things like that. So it at least doubles the register space of the 8080. Its instructions: as I said, it's a complex instruction set, so instructions will often do memory loads and stores as well as the operation itself. It can do 16-bit operations, as I said, like additions and subtractions, or operate on 16-bit addresses. Its instructions are variable length, from 1 to 4 bytes, and can have, depending on the instruction, from 1 to 3 operands. It also has very interesting memory block instructions that can be used to do a memcopy or memcompare in a very small number of instructions. For instance, it can do a memcopy in one instruction, except for the setup: the instruction repeats itself, without advancing PC, until a few conditions are met. What else? We have the flags here; let's zoom into the flags. As I said, F is an 8-bit register in the AF register pair. Let's go into what flags it has. It has a carry flag, which is very standard for additions and subtractions, and a parity/overflow flag. It has an N flag, which is a bit weird; it's a flag bit that was reserved for DAA, an instruction dedicated to binary-coded decimal operations. I won't go into detail. The half-carry flag is the same; it's dedicated to these kinds of operations. We have the zero and sign flags, and two other bits, 5 and 3, which are documented as not used. This is an extract from the official Zilog CPU manual. So, they are not used. Okay. Except we've known for a long time that even though they are not used, we can access the flag register, and they have a value, which is not documented, at least not officially. The most well-known way these flag register bits are modified is through the 5,3 copy: a copy from the result.
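The common case of the 5,3 copy can be sketched as follows; the bit mask is the real flag layout, while copy53 is an illustrative helper, not cycle-accurate emulator code:

```go
package main

import "fmt"

// The undocumented Z80 flag bits: bit 5 (often called Y)
// and bit 3 (often called X).
const mask53 = 0b0010_1000

// copy53 models the usual behavior: after an instruction that
// operates on the accumulator, F takes bits 5 and 3 from the result.
func copy53(f, result byte) byte {
	return (f &^ mask53) | (result & mask53)
}

func main() {
	f := byte(0xFF)
	result := byte(0x00) // a result with bits 5 and 3 clear
	fmt.Printf("%08b\n", copy53(f, result)) // prints 11010111
}
```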
If we have an instruction that operates on the accumulator, most of the time bits 5 and 3 of the accumulator will be transferred to bits 5 and 3 of the flag register after the instruction. Here's a sample from The Undocumented Z80 Documented, a very useful document for emulator authors which records behavior of the Z80 that you can't find in the CPU manual. We have this instruction called BIT n,(HL) whose bits 5 and 3, here called Y and X, do not come from the operand; I'm not sure exactly where those names come from, but let's say they come from the MAME emulator. For this instruction, BIT n,(HL), The Undocumented Z80 Documented says they come from some sort of internal register. This register is related to 16-bit additions, and most instructions do not change it. This register was already documented as early as 1981. Here we have Programming the Z80, the third edition, but I think it's also in the previous editions. It's documented as W and Z. It says that we have two special registers available to the control unit within the Z80, but not to the programmer. So it was already known that inside this CPU there is a temporary area, used for a few calculations, that is not exposed. In the no$zx emulator, a proprietary emulator from the no$ family that was very popular in the 90s and even into the 2000s, its author Martin Korth wrote some documentation in which he called this internal register MEMPTR. He documented some of the behavior of this internal register, but not all. That will be important later. In particular, the memory block instructions, which I talked about earlier, were not tested and not documented. Going back to this internal register, I want to present zexall. If you've written an emulator, you probably already know what this is. It is the Z80 instruction exerciser, written in 1994 by Frank Cringle. It's designed for the ZX Spectrum, I think.
It works by generating a binary with a lot of instructions, mainly with macros. After each instruction that is run, it feeds the result, the CPU state, all the registers, into a CRC, and then it continues the test. Once it has finished a category of instructions, it prints the CRC. In order to pass, you must produce the same CRC as real Z80 hardware. It became a reference for emulators, except there's a catch. First, let's sum up the mystery we have here. We have this internal register called WZ, or MEMPTR. It's an internal, 16-bit register. It leaks through the BIT n,(HL) instruction. This instruction, which I haven't talked about yet, is basically a bit-test instruction: it loads a single byte at the address in HL (HL is a register pair), tests the nth bit of this byte and puts the result in the flags. Through this instruction, two bits of MEMPTR leak. We know for a few instructions how they affect this register, but not all of them, and we don't know the exact behavior. We have a few missing things. This led to something a bit weird, which is that up until 2006, no emulator passed zexall, the instruction exerciser. They were failing on the BIT n,(HL) instruction, because a few uses of this MEMPTR register weren't known. In order to crack it, on the Russian forum zx-pk.ru, a few hackers, mainly Boo-boo and Vladimir Kladov, with help from many other members of the forum, started reverse engineering it. They summarized all they knew, which I pointed out earlier. They had a breakthrough after a few hypotheses; the first hypothesis was that the indexed instructions, the specific type of instructions that use the index registers and indexed addressing, always initialize MEMPTR the same way. This way, we have a way to always set it to a single value, let's say zero, for instance.
This led to the discovery of another memory block instruction, CPI, which increments this 16-bit register, MEMPTR, whereas CPD decrements it. That was the breakthrough. But why? Why is that a breakthrough? Because we now know how to modify this register, and we can use this to leak its content. The goal is to dump it. We will use the fact that CPD decrements the register in order to dump the lowest 14 bits. I'll give you an example. Here I have shown the 16 bits of MEMPTR. Its initial value is three, so its lower two bits are one-one. It will be decremented with CPD: it goes to two when decremented once, one the second time, zero the third time. If we decrement it a fourth time, we can see that the two bits that are leaking, bits 5 and 3 of the upper byte, go from zero, zero to one, one. We can use this in an algorithm; I'll let you think about it. If we see exactly this transition from zero, zero to one, one, then we know that the number of times we decremented it, minus one, is the initial value of the lower 14 bits of MEMPTR. With this method we can't leak the upper two bits, but that's not important: we already have enough information to run all the instructions, and that's exactly what Boo-boo and Vladimir Kladov did. They ran every instruction on the Z80, and after each instruction, ran a dump routine. This routine allowed them to document the remaining behavior of MEMPTR. I won't go into the detail of which instructions modify it, but they produced a document called memptr_eng.txt. If you've written a Z80 emulator, or modified one, in the past 15 years, you probably already know about it. But it's quite interesting to think about. So what use is this? One example use would be to do a CPU ID.
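The dump technique described above can be simulated in a few lines; this models the algorithm (simulating CPD as a plain decrement and BIT n,(HL) as a two-bit leak), it is not real Z80 code:

```go
package main

import "fmt"

// leak53 returns bits 5 and 3 of the high byte of MEMPTR,
// the only two bits that BIT n,(HL) exposes in the flags.
func leak53(memptr uint16) byte {
	return byte(memptr>>8) & 0b0010_1000
}

// dumpLow14 recovers the low 14 bits of MEMPTR by repeatedly
// "executing CPD" (a decrement) and counting steps until the
// leaked bits go from 0,0 to 1,1, i.e. the low 14 bits wrapped.
func dumpLow14(memptr uint16) uint16 {
	var count uint16
	for {
		before := leak53(memptr)
		memptr-- // one CPD
		count++
		if before == 0 && leak53(memptr) == 0b0010_1000 {
			return count - 1 // decrements taken to reach zero
		}
	}
}

func main() {
	fmt.Println(dumpLow14(3))      // the example from the talk: prints 3
	fmt.Println(dumpLow14(0x1234)) // any 14-bit value is recovered: prints 4660
}
```

The 0,0 to 1,1 transition on bits 13 and 11 can only happen when the lower 14 bits wrap past zero, which is why counting decrements recovers the initial value.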
We know from the result of the previous experiments that the clones do have MEMPTR, and their makers did reverse engineer its behavior, even if they didn't produce documentation. But there are small variations in MEMPTR behavior between them. For instance, the effect on the T34VM1, the Soviet clone, is documented. So this can be used, for instance, for a CPU identification, a CPU variant identification. Another example use: with the ZX Vega, a device which was a ZX Spectrum redesigned as a black box and released in 2015, it was found that it in fact used a version of the Fuse emulator from right before it implemented MEMPTR. So it probably had GPL violations there. So is this the end? Most emulators caught up, and they now implement the Z80 very accurately, even this very arcane behavior, which shouldn't affect most known software. Everything is reliable, everything is accurate; I guess it's a success. Well, no. There was this other thing that was discovered in 2012 by Patrik Rak. Here is a screenshot from the World of Spectrum forum, and he summarizes the situation quite well: after finally having nailed MEMPTR a few years ago, just because of BIT n,(HL), it now seems there is yet another internal register which actually affects at least the outcome of SCF and CCF. That's quite disheartening, as it means that no Z80 emulation out there is actually accurate again. So what are SCF and CCF? The set carry flag and complement carry flag instructions. If you remember, I talked about the 5,3 copy, which is the most well-known way the undocumented bits 5 and 3 of the flag register are modified. Well, in the case of SCF and CCF, it's not systematic: it may be an OR instead, depending on some internal state. This state is supposedly a temporary zone where the flags, the F register, are assembled, and SCF and CCF will only copy bits 5 and 3 from the result if the flags have changed during the previous instruction. Otherwise, they will OR with this internal register Q, which holds the previous F value.
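That description of Q can be sketched as follows; this models SCF exactly as described in the talk (copy when F just changed, OR otherwise) and is a behavioral sketch, not a full flag emulation (the real SCF also clears N and H, for instance):

```go
package main

import "fmt"

const (
	mask53 = 0b0010_1000 // the undocumented bits 5 and 3
	flagC  = 0b0000_0001 // carry, bit 0
)

// scf models SCF as described: set carry, then take bits 5 and 3
// from A either by copy (if the previous instruction modified F)
// or by OR-ing them into the previous F value.
func scf(a, f byte, fChangedLastInsn bool) byte {
	res := f | flagC
	if fChangedLastInsn {
		res = (res &^ mask53) | (a & mask53)
	} else {
		res |= a & mask53
	}
	return res
}

func main() {
	// F has bit 3 set, A has bit 5 set.
	fmt.Printf("%08b\n", scf(0x20, 0x08, true))  // copy: bit 3 is lost
	fmt.Printf("%08b\n", scf(0x20, 0x08, false)) // OR: both bits survive
}
```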
So it means that in order to emulate Q, you only need to know whether F changed during the previous instruction, and then you can do an OR with the current F instead of a copy. And in order to test this behavior, Patrik wrote a test suite called z80test. It's, in my opinion, the gold standard among Z80 instruction exercisers. It supports testing of both MEMPTR and Q. It also found that Q exists only on the Zilog version, not on the NEC chips. I said my talk would be about my emulator; it won't be, but it's called gears. It's written in Rust, it passes z80test and the test suite from the Fuse emulator, and it's available at this URL. So are those really the last secrets of the Z80? In my opinion, probably not. The behavior of all the clones is not 100% known. I've talked for instance about the behavior of the NEC parts, which was found by Patrik, as well as the Soviet clones. But as I said earlier, there are many other clones, and when implementing those clones, the hardware engineers probably also had a lot of knowledge that might have been lost to the past. And with this, this concludes my talk. Do not hesitate, if you have any questions, to go into the chat right now and we'll answer all the questions. Here are the references and the citations for the images. Well, thank you very much, have a nice first day. I was just asking you, because you're a FOSDEM, I want to say, veteran, right? You've been to FOSDEM a lot. So you had stuff to do with the Go room a couple of times. Is that correct? Yeah, I did a talk in the Go Dev Room on an embedded project. I've been to FOSDEM since, I think, 2009. So I'm a veteran, but not one of the oldest; I know there are people in the audience that have been there more than me. Yeah, all right. That's fair enough. All right. So are you ready for some questions? Yeah, it's cool. All right.
So the widgets thing, where the questions appear, you can see it as well; I think it doesn't show the questions in the order they were asked. So I apologize in advance if we ask the questions out of order. But there's a question: nearly 50 years later, people are still finding secrets. How does this happen? Like incidents on determined program behavior? And there's also a follow-up question, but let's do this one first. Okay. That's actually a great question, because I'm not sure people are still finding secrets; maybe people are finding them again. As I said in my conclusion, some secrets were found: the Z80 has been reverse engineered before, for the clones. They were found, and then forgotten again. So what happens is that the publicly available documentation on those secrets takes time to come out, and sometimes it needs reverse engineering all over again. All right. The follow-up question is interesting: so when do you know that everything has been uncovered? Because you would think that since the Z80 is not something that was invented yesterday, it's been around, as you said in your talk, for almost 50 years. So when do we know that it's 100%? And that's a good question. I'll answer in the context of emulator development. Many emulator authors think that what's in this talk is relatively futile. Why? Let me go into the detail: as far as I know, no software, except maybe a few test suites, depends on MEMPTR. And software can't depend on it, because it's not documented. So if the purpose of emulation is to run old software, then you don't need an exact match for the hardware, as long as all the software that was written already runs. So when does this game end? I would say the game ends when all software written before the emulator runs properly. Let's give it this limit.
Now, if you want to talk about history and archival and the pursuit of knowledge, of course, it never ends. Yeah, okay. But, okay, how do I say this in a good way? Because, you know, the Z80 is different from, let's say, a PlayStation or a Nintendo, because there you have a limited set of games, and when they stopped developing the thing, hopefully people stopped developing for it. But the Z80, you know, it's been everywhere, and people have kept developing for it. I mean, a friend of mine keeps telling me that he's a very big fan of the Z80, and he grew up in the Eastern Bloc, and he keeps telling me about the Soviet version of the Z80. And I mean, there could be a ton of software out there that we know nothing about that does use it. Right. So it's true. So, yeah, I don't know, but it's interesting. All right. That's also why all emulators written in the past 14 years support MEMPTR, because we had this documentation. And yeah. All right. And I don't remember if you spoke about this in your talk, because, like I mentioned before the talk, you were one of the first speakers to upload your talk, so I watched it weeks ago, months ago, before the other speakers. But: so the test suite that nobody could pass, right? How did that actually come together? Frank, when he wrote his program, wrote tests for many types of instructions. And the tests are actually generated both with macros and at runtime, because a lot of code is generated. So there's a section of the test that runs the BIT n,(HL) instruction, which I've talked about, which is a bit-testing instruction. It means that when all the state is dumped into a CRC, the flag register might be different depending on the content of MEMPTR.
So if you don't know how to emulate MEMPTR properly — if you don't know which instructions modify it — then when BIT n,(HL) arrives, right at that instruction, you won't know the real value that ends up in the flags. That's why no emulator fully supported it. Okay, this is the last thing I'm going to ask about this, because it's very interesting — but didn't people just say, "Hey guys, Frank is crazy, his test suite is just wrong"? Actually, they did. For about 12 years, no one cared. They said: okay, it runs on hardware and doesn't run on my emulator, but all the other software we know of works, so that's fine. But as I said, then some people, in the pursuit of knowledge, wanted more — they were fans of the Speccy and the Z80 — so they reverse engineered it. And yeah.

Okay. All right. Let's see — actually, I don't remember that I asked anything except my own questions. And this is the reason you should be a devroom manager. But not in real life — in real life, the devroom manager job is boring compared to virtual. Anyway, a question that a lot of people liked: why are you writing a Z80 emulator yourself, and what are the hardest parts? There's also a follow-up question: would you recommend that people write a Z80 emulator, and why, and when should they tackle it? What do you mean — at what point? Yeah, okay: should they start with a Z80, or should they start with, I don't know — a lot of people like to start with a CHIP-8, which I'm not 100% sure what it is, actually. It's a virtual computer; it does not physically exist. It's like Bitcoin. Okay. Why myself? First of all, it was a learning project — it's my first emulator. I wanted to learn how to write an emulator. I've written software for quite some time; I've worked with complex systems — virtualization, kernels, etc.
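To make the MEMPTR effect concrete, here is a minimal sketch — in Java, not taken from the speaker's Rust emulator — of how BIT n,(HL) computes its flag register. The undocumented X and Y flags (bits 3 and 5 of F) are copied from the high byte of the internal MEMPTR register, which is why an emulator that tracks MEMPTR incorrectly ends up with a different CRC on this part of the test suite:

```java
public class BitHlFlags {
    // F register bit positions on the Z80
    static final int S = 0x80, Z = 0x40, Y = 0x20, H = 0x10,
                     X = 0x08, PV = 0x04, N = 0x02, C = 0x01;

    /**
     * Flags after BIT n,(HL). `value` is the byte read from (HL),
     * `memptr` the internal 16-bit MEMPTR register, `oldF` the previous
     * flags (only carry survives the instruction).
     */
    static int bitHl(int n, int value, int memptr, int oldF) {
        int tested = value & (1 << n);
        int f = H | (oldF & C);             // H always set, C preserved, N reset
        if (tested == 0) f |= Z | PV;       // Z and P/V track the tested bit
        if (n == 7 && tested != 0) f |= S;  // S only when bit 7 is set
        f |= (memptr >> 8) & (X | Y);       // undocumented: X/Y from MEMPTR high byte
        return f;
    }

    public static void main(String[] args) {
        // Same operands, different MEMPTR -> different flag register.
        System.out.printf("%02x%n", bitHl(0, 0x01, 0x0000, 0)); // only H
        System.out.printf("%02x%n", bitHl(0, 0x01, 0x2800, 0)); // H plus X and Y
    }
}
```

Hashing the flag register after each such instruction into a running CRC is exactly how the test suite exposes the difference without knowing where it came from.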
And I wanted to learn a bit about Rust and a bit about writing emulators. I chose the Z80 mainly for nostalgia, because one of the machines I liked when I was a kid had the Z80 as its main CPU. Someone joked that it was the first Steam Deck — a portable game console, the Game Gear. Yeah. So that's the main reason. Should you start with a Z80? That's actually a good question. I think if you want to write an emulator quickly, you should probably choose an architecture that has fewer instructions. This one is a CISC architecture; it has many instructions. It might be simpler to target a smaller machine. But again, it depends on your own personal motivation — I think it's heavily personal. You get to choose what you want to do with your hobbies. Yeah, right.

And very quickly, without going into it: Rust or Go? It's a big question. Let's say both, and I won't go into more detail on which one you should choose. All right, let's keep it politically neutral for now. Okay, I do have a take on this — you don't have to answer; a lot of people might hate you for it. So: if you like C++ or complex languages, maybe Rust is for you; it has interesting things. Even if you don't like complex languages — many people come from the Ruby community, for example. And Go, I would say: if you like the simplicity of Python coupled with C, you would choose it. But that doesn't say much about the languages, so go with whichever is useful for you. All right. Let's see — I keep seeing questions and not asking them; I apologize in advance for that. Okay: do projects like Visual Z80 help with researching the processor's internals? Are they enough to understand everything that's happening in the processor? I had never heard of Visual Z80 before this question, so I don't know. I'm guessing it's a debugger or something, so I can't say.
One important thing — which might not have been a good idea — is that when I started to write this emulator, I didn't want to look at others' source code. I didn't want to get any inspiration from any other author. And it might not have been the best idea, because there were some constructs which I didn't know about, which would have made it faster. But yeah. Okay, that's learning, I guess. Actually, when I first started writing emulators, that's exactly the same thing I did. And also, instead of looking up information, I would just assume that I was smart enough to figure it out myself. And I wasn't. Now I admit this to myself, and I'm a better man for it — I think. Sorry. Stop laughing.

Anyway, let's see. Next question: are there any Z80 decapping projects? And if so, how viable would it be to find out about these hidden instructions that way? That's actually a very good question. I've thought about it before, and I'm not sure I have a definitive answer. What you need to know is that if you watch my talk, you can see decaps in the title image. I've posted links into the room, and the decap is also one of the first links if you go into the references of the slides — for the people watching the recording. There are very high-resolution images of the Z80 die publicly available from CPU decapping. As far as I know, there are no reverse engineering projects that use this decapping. I know there are alternative FPGA implementations you can find, but I don't know of any project that uses these decaps. All right. Let's see — never mind. Another question: are these secrets known to affect emulator compatibility with software from that era? Kind of what I asked before. Yeah, I've kind of answered this already: I'd say it doesn't affect compatibility unless you're running zexall, which MAME now supports. So you should be okay. It doesn't affect compatibility, as far as we know.
There could be some software in a basement in the KGB somewhere that will not run on the emulators. As I said at the beginning of the talk, this chip is still being produced, and there are still people writing software for it. So yeah. All right. Let's see: are there clone detectors or emulator detectors — a piece of code that indicates, based on the behavior, what hardware or clone it is likely running on? I'm not sure I'm qualified enough to answer this question. I'm sure someone might have written one on a forum or something. You have enough information in this talk to start writing your own. Yeah. And then a follow-up to this — not a follow-up question, a follow-up comment: I think the talk from Thomas about static analysis kind of touches on this subject. So even if it doesn't, you should be there just to listen to it, because, you know, more people means we get more money — or not; we don't get money.

Let's see. Yes: "I'm a 6502 guy" — me as well — "but when I see all those Z80 registers, I marvel at the luxury." This wasn't really a question; I just liked the comment, so I upvoted it and now it appears here. So, are you familiar with the 6502? Naturally, I've looked at it a bit during my research, but I'm not familiar with it. All right. Let's see: your emulated Game Gear passes all the tests — what is your next project? What talk can we expect from you next year? Good question. Great question. I'm not sure there will be a talk next year, but you never know. My next project is to finish this emulator by going into other parts than the CPU — maybe the VDP or something like that. There are many things to emulate if you want to emulate the full machine, and the CPU is just one part of it. Yeah. Are you still here? Yeah, I can hear you. I'm just playing around with the camera, because you lose signal sometimes, and I think your kids are downloading a PlayStation update, or Discord is updating, or something. So yeah. Okay.
So this is a small follow-up question from me, because Roland, in his talk, started his project because he wanted to play a specific game — which, ironically enough, he can't play yet. Did this start for you with something like that, or were you just interested in the subject in general? If I wanted to just play the games, there are many emulators available out there, and I could just play those games in them. So it's not the same as what Roland did — there, it's a bit harder to just access the original material. In my case, there are many emulators, so the main reason was to learn; the final goal is maybe to run a game at some point. Yeah. All right. And somebody here is asking: can you play us something on that harp? It's not my harp, it's my daughter's, and I don't know how to play the harp. Sorry. All right, we'll believe you. It's a common excuse.

Let's see. There's a question: I presume that, being a microprocessor that uses microcode, discovering everything is like discovering all the bugs in a piece of software? Yeah, exactly — when you're doing reverse engineering, it's exactly that. You have to discover things which you don't have the source code of. And sometimes, even if you have the source code, you still don't know everything; maybe it's not documented. You're discovering something created by other people, and it's basically reverse engineering and documenting what you find. And we had a remark in the answers about Visual Z80: apparently it's a Z80 simulated directly at the gate level, or something like that. I don't know exactly how it's done, so maybe go check it out — and I will go check it out too. Yeah. All right. Let's see. I will ask my lovely assistants: did I miss any questions? I don't think so. Okay. I want to add maybe one remark to the talk, which I didn't talk about: the Q register, the most recent discovery, by Patrik.
It's not implemented yet in MAME, and I haven't had time to verify it on real hardware, even though I have some; but it seems that it's not even present in all the Z80 variants from Zilog. So even this secret is still a bit contested, I'd say. Exactly — in which variants is the Q register found? Yeah, that's just the result of my findings on a few forums. All right. Cool. I do not see any more questions. Personally, I would really like to thank you for doing this. Thank you for being here; I hope you were happy doing the talk. Thanks for having me, and sorry for my weird questions, but I'm nervous and I can't resist. All right. Thank you very much. Hopefully we'll see you next year in real life. Who knows? Maybe. Sorry? If anyone has any more questions, I'll be in the chat for a while, so join the chat. If you're watching the recording — I'm sorry; you can still come to the chat, but he won't be there, right? Actually, we just discovered the chats from last year are still open, and we are still in those chats. So you can still come to the chat, maybe tag Anisse, and who knows — he might answer, or not. Right. So thank you very much, and have a pleasant FOSDEM. Thank you. Bye. Bye. Bye.
While writing a Z80 emulator, you might ask yourself: should it be 100% accurate? Maybe not, if you just want to run old software. But what if that old software accidentally depends on undocumented behavior? In this talk we'll explore how, 30 years after its design, a few people on a Russian retro-computing forum reverse engineered an undocumented Z80 register: MEMPTR. And how, 6 years later, another, even less known, secret register was discovered.
10.5446/56856 (DOI)
Hey everyone, thanks for joining. This is going to be a gentle introduction to a tool called PicoCLI; by the end of the talk, you should hopefully know a lot more about it. We're going to have some fun with Kotlin and Star Wars as well, because this is the Kotlin room. I would have loved to do this talk in person but, COVID helping, it's online again. My name is Julien Lengrand-Lambert. I'm a developer advocate for a company called Adyen. If you haven't heard of it, it's the company that processes the payments of companies you use every day — Spotify, Blizzard, SoundCloud, just to name a few, and a lot more. I was also named a Google Developer Expert for Kotlin last month; I'm really happy about that.

We're going to be looking into PicoCLI. PicoCLI has a very simple purpose — it does one thing: it's a tiny library to create command line applications. I like their motto: "Every main method deserves PicoCLI." We'll see that's actually pretty accurate. I really like it for several reasons. It's very easy to use, as we'll see very soon. It's very rich — it has a lot of features — and it works on pretty much all JVM languages, which is very interesting these days because we're getting diversified. It is also pretty active on GitHub; if you look at the stars and the documentation, everything is really good. You have used PicoCLI without knowing it: it's powering things like Hadoop, it's in Debian, it's used at AWS and by Microsoft. You have used it somewhere.

If we take a step back before looking into PicoCLI itself, let's look at the reasons why you would want a command line application. A few reasons. The first one is, well, it's nicer than having a big mess of Java invocations — that's what we're going to replace in the demo. Having something shorter, more concise, is interesting not only for you, but for your team, especially if you do something repetitive — and also for other members of your team, maybe your product owner, or other teams.
It's better to have one very simple command. The second thing that I also like, from a user experience point of view, is that Java becomes irrelevant — the JVM is not relevant when you run the application; it's a command line. Having a proper CLI avoids having to enter the world of Java; I don't care about it anymore. One of the very nice things about CLIs, if you build them properly, is that they can fit into each other. Everybody sometimes does a pipe to grep, I think. This is the same idea: command line interfaces are for humans, but they can also be used as pipes to other things. For example, I could imagine a small utility that feeds configurations to a kubectl command. That's a very nice tool-to-tool interface that a GUI cannot offer. Usually, the documentation is also quite clear: you don't have to go to a separate website; the documentation is embedded into the command line itself and, hopefully, quite descriptive. You have a lot of power in a very small thing; that's very nice. Last but not least, some of us — especially developers — really like to avoid GUIs if we can. This is a bit of a joke, but it's nice; users are searching for those things. There's a reason why you see all those big-name companies with CLIs as well — AWS and the others have created CLIs because that's what people want.

Why Kotlin? "If I do a CLI today, I'm going to go for Go or Rust." That's very true — if you can start from scratch, that's probably what you should be doing. But my code base is Kotlin: I have a five-year-old code base, all in Kotlin and Java. I know Kotlin very well, my team knows it, everybody in my company knows Kotlin. If I'm in that specific use case, then PicoCLI is a great utility — the best choice. We're going to be touching just a couple of its features, but the documentation for PicoCLI is crazy good: you can have colors, you can do multi-platform, it generates documentation for you. That's very nice.
The author is also super reactive on Twitter, which I really appreciate. We're not going to spend more time on slides for now; we're going to look into the demo. We'll go through a very simple use case, fulfill it, and then try to improve the experience for the user. We'll be using SWAPI, the Star Wars API, which is nothing more than a website you can query that gives you JSON back. It covers a lot of things: the vehicles of the franchise, the species, planets, all of those. We're going to do a simple use case with planets and people. Let's go.

All right, let me switch. For starters, a few words of background. I said we're going to use the Star Wars API, and we're going to be doing two things: planets and people. At its core, this is nothing more than an HTTP GET query with some filtering, plus some data classes to get the information back. And so that we don't get raw JSON thrown at us, I have a pretty printer, which lets me display things in a nice way — you can see that I can use Unicode, and formatting like bold, colors, et cetera. But it's nothing more than a pretty print. If I want to start using PicoCLI, there are two things I need to do: the first, as you would expect, is the dependency; and then, if you want the annotation processor in IntelliJ, you're going to have to use the plugin.

Right. Let's start small. I have a main application, and I just want to run my command line — it's not going to do anything yet. Well, I have my main here. PicoCLI runs with a callable or a runnable — you need a Callable or a Runnable; you choose one of those depending on whether you want to return a value or not. And then you simply add the command annotation. That's it — we have a command line application.
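The minimal command just described might look like the following in Java (the talk's demo is in Kotlin). This is a hedged sketch, not the speaker's actual code: it assumes the `info.picocli:picocli` dependency is on the classpath, and the command name and version string are made up for illustration.

```java
import java.util.concurrent.Callable;
import picocli.CommandLine;
import picocli.CommandLine.Command;

// "swapi" and the version string are illustrative, not the real demo's values.
@Command(name = "swapi", version = "swapi 1.0",
         description = "Queries the Star Wars API.",
         mixinStandardHelpOptions = true)   // adds --help and --version for free
class Swapi implements Callable<Integer> {
    @Override
    public Integer call() {
        System.out.println("Running!");
        return 0;                            // becomes the process exit code
    }

    public static void main(String[] args) {
        System.exit(new CommandLine(new Swapi()).execute(args));
    }
}
```

`mixinStandardHelpOptions = true` is what gives you the auto-generated `--help` and `--version` shown in the demo without writing anything yourself.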
None of those attributes are required, but they're nice to have for more information: we can add the version, the name, a description, et cetera. So if we want to run it — there you go, I'm going to run my command line application. Boom, it's running. That's it, we're in business. Now, since I asked for the standard help options, I have access to the version and to help, and it's auto-generated — you can see it in front of me; I have done nothing else than adding a name and a description. Pretty nice, I think. Pretty nice, but not very useful, so we want to move forward.

As I said, we're going to build a simple use case where I want to be able to query people and to query planets. Of course, it doesn't make sense to do both at the same time. So that's what we're going to build here, and I'm going to introduce two different things. The first is options. Options are typically the things you see with a minus sign in command lines: the -h, -v, -i. Usually they have a short and a long name; that's what we're doing here. And they may be required or not — if you run FFmpeg, for example, you say "okay, this is my input", and then you might add codecs that are not required. That's what we're going to do. Now, remember, it doesn't make sense to do both at the same time, so we're going to introduce the concept of a group that is exclusive. Those two options are exclusive with each other, but I do need one: either one of them, but not both, and not zero. I think that shows the power of the tool very quickly — this is a use case that's not quite trivial, but we're doing it in a couple of lines. Last but not least, I want to be able to filter my search. And what I'm going to say here is: there's going to be a parameter.
So at the end of my CLI, there's going to be a parameter. Typically, if you do ls to list the files in a folder, and you do "ls ." or "ls ~" for your home folder, the tilde or the dot is the parameter — that's what you have at the end. We'll use parameters here: I'm going to take the first parameter and say that it has an arity, just like a cardinality in a database — zero, one, or n. In this case, I'll say there can be zero or one: maybe it's there, maybe it's not. Once we have defined how our command line behaves, we have to implement the logic. What we're going to do is look at which option has been selected; depending on that, I'm going to make an API query for that specific type of thing, and then pretty print it. The rest stays the same as before.

So let's go and run it. We're running the options version here. Let's try a few things. First, we try without any option. Right — PicoCLI, as usual, tries to help me by showing what I can do, but also tells me: by the way, you have to use one of -c or -p. It gives me a descriptive error — great for user experience. So let's go with: I want a character, and I want anyone matching "obi". Boom — Obi-Wan Kenobi, one result. That's what we want, right? We're in business. I cannot do both — again, PicoCLI is enforcing this for me. And if I need the help, it prints the help for me. Right? Pretty simple. We've done exactly what we wanted — the money shot, directly. Now, this is all nice, but if you start using this tool for real, you have to imagine a more complex tool. There are going to be a lot of options coming up; maybe they're exclusive to each other, maybe some are required. So it's going to get complex fast.
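The exclusive group and the optional trailing parameter described above might be declared roughly like this in Java. Again a hedged sketch assuming the `info.picocli:picocli` dependency; field names, option names, and descriptions are illustrative, not the talk's exact demo code.

```java
import picocli.CommandLine.ArgGroup;
import picocli.CommandLine.Option;
import picocli.CommandLine.Parameters;

class SearchOptions {
    // Exactly one of the two options must be given:
    // exclusive group with multiplicity "1" (not zero, not both).
    @ArgGroup(exclusive = true, multiplicity = "1")
    Kind kind;

    static class Kind {
        @Option(names = {"-c", "--characters"}, description = "Search people")
        boolean characters;

        @Option(names = {"-p", "--planets"}, description = "Search planets")
        boolean planets;
    }

    // Optional trailing search term: arity 0..1, like `ls [PATH]`.
    @Parameters(arity = "0..1", paramLabel = "QUERY",
                description = "Filter term for the search")
    String query;
}
```

If the user passes neither `-c` nor `-p`, or both, PicoCLI rejects the invocation and prints the descriptive error shown in the demo — no hand-written validation needed.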
And if you've used tar in the past, you may know that you never remember whether it's tar -xvcf or what — I always have to Google it myself. A better approach in terms of user experience these days — one that a lot of people are using — is subcommands. That's typically what's used by kubectl or by the GitHub CLI, for example, where you say "gh repo" and then you end up in a part of the tree that is only valid for repositories: you're not going to get things about gists or about forks. I have a repo and I can view it, I can clone it, I can do those kinds of things — while it doesn't make sense to clone a secret. That's very nice because it helps me, as a user, think in terms of semantic behavior. I don't have to learn as much, and that's nice.

So let's look at how we do that. Same use case — planets, characters — but this time we're going to define subcommands. Instead of declaring our options and parameters, we define subcommands: two of them, plus the generic help one, which is auto-generated. The rest stays the same. What these subcommands are is literally just commands — we're basically working with a tree of commands that interact with each other. This planets command, if we look into it here, is literally a PicoCLI command by itself: I could take just this one, put a main in there, and planets would work on its own. But it's a subcommand, as we've defined it. The people command is exactly the same in terms of logic: doing the API request, pretty printing, everything. All of the rest doesn't change — we need a parameter at the end, maybe, and that's about it. So let's run it. Subcommands. Two things: again, I can run it like this, and it tells me: well, you've basically just run the help; that's nice, but you want to be running commands — you want to search for planets or people. So it helps me. So I can do planets, right?
Boom — it gives me a list of all the planets. Very nice. And I can search, which gives me Tatooine. So already I don't have to do -p anymore; I just write "planets". I mean, already, that's very nice. Now, because PicoCLI is nice, and because — as we said — the help is just a tree of commands by themselves, I can actually ask for the help of specific subcommands. This is a very simple use case, but imagine it's complex: I can get the help of a sub-part of my command line interface. Very nice. Okay. And just like before, we cannot do crazy things, like "planets people" — that doesn't make sense; we have too many arguments. And again, PicoCLI is trying to help us with the error. That's one of the things that I really, really appreciate these days: I have an array of tools that I use every day — I don't want to have to learn them all, and I don't want to have to teach my users. So the tool is helping my user: very nice error descriptions, auto-generated help (extremely important), and colors as well. We are helped to understand.

Some of you might say: okay, this use case is nice, but my specific use case requires options that are not defined up front — I don't know exactly how many subcommands I'm going to have; this has to be generated programmatically. Well, you're in luck, because PicoCLI also supports a programmatic interface. You don't have to use annotations — maybe you even just don't like them; I find them pretty interesting. You can build the same thing in a programmatic manner. It's a bit more verbose, and you're going to have to do the plumbing yourself, but we can do pretty much the same thing: define the commands, the command line, say that we want the standard options, define other command lines, right?
And we're going to say: okay, this command line is going to have a parameter, and then I'm going to build it; at the end, I'm going to add the subcommands. So this programmatic interface is literally the same as the subcommands version I just had before. I don't have it ready to run here, but we could run it — it runs the same thing, right? And then there is an execution strategy; you'll have to go into the documentation a bit more for that. But in case you want to do for loops, functional programming, generating things on the fly — say you have a big documentation set and you want a CLI to query against it, for example your OpenAPI specs — why not?

I'm going to show one last example, which is pagination. One of the things we see if we run planets is that it spits out a hundred-something planets at me, which is a bit of a shame — again, not super nice from a user experience perspective; we're trying to help our users here. I want something like pagination, like I would have for a git log: if I do git log, I go through pages and pages. I want the same. Well, that's where the author actually helped me: you can hook into pretty much everything. We can still use our annotations, but the PicoCLI programmatic interface is available too. So what we do is: we create a temporary file and a print writer, then we hook into the execution strategy I was talking about a second ago, and we hook into the output of the command line — we send the output somewhere else, pipe it to another place, and then read it back. That's what we're doing here. Let's see: paginate, and then planets. Now I have pages to work with, right? Super nice. There is not much more that I can say about that; I think that's pretty much everything.
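The core of the pagination trick — capture the command's redirected output, then replay it in fixed-size pages — can be sketched without PicoCLI at all. This is a self-contained illustration of the idea (chunking captured output lines into pages), not the speaker's actual hook into PicoCLI's execution strategy:

```java
import java.util.ArrayList;
import java.util.List;

public class Paginator {
    /** Split captured output lines into pages of `pageSize` lines each. */
    static List<List<String>> paginate(List<String> lines, int pageSize) {
        List<List<String>> pages = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += pageSize) {
            pages.add(lines.subList(i, Math.min(i + pageSize, lines.size())));
        }
        return pages;
    }

    public static void main(String[] args) {
        // Pretend this is the redirected output of the `planets` subcommand.
        List<String> out = new ArrayList<>();
        for (int i = 1; i <= 7; i++) out.add("planet " + i);
        // In the real tool, each page would be printed after a key press,
        // like `git log` does.
        for (List<String> page : paginate(out, 3)) System.out.println(page);
    }
}
```

In the talk's version, the "captured output" comes from pointing the command's print writer at a temporary file via the custom execution strategy, then reading that file back page by page.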
I will say one last thing — I'll come back to it in my slides, but I don't want to come back to IntelliJ if I don't have to: you can build a native image; you can use GraalVM to create a native image if you want. This is the line that I'm using: basically just making a fat jar, naming the output file we want, and then boom — I have a macOS executable that I can just run. The interest of that: you can create releases. You don't have to package things and then create bash files — you just have this one file that you put in your path somewhere, or put online. I've put it as a release on GitHub: click on it, you can run it. That's quite nice, I think, again, from a user perspective.

Right, let's come back to our slides. I've shown a few ways to do things, but as always, if you're creating something — especially a CLI — you have to think first about your user: who is going to be using that stuff, and how can you help them not have to learn yet another tool? One of the quotes that I really like is this one from Martin LeBlanc: "A user interface is like a joke. If you have to explain it, it's not that good." I really like that quote because, as we've seen in the examples, we can do much better than what we would typically think of. I don't want to have to read the code — I'm already using ten CLIs every day. And I think a lot of them are doing a good job today, so we can do the same; the expectations are quite high these days. So here, subcommands are a good way to use semantics to give a lot of value to people: they might just write stuff, try things out, and it works. Best practices as well: one of the things that I see sometimes — if you create your own CLI, that's nice, but try to use some simple defaults. For example, you can say the default output is going to be YAML, and you can change it if you want.
Use defaults wherever you can, because it allows the user to not have to think, and it keeps things very concise. It's much more helpful to write something like the left example than the right one. Another one: there are unwritten rules and defaults in the industry, just like in video games — if you put the jump button at the top of your controller, people are going to be angry. Same here: with -i, I expect an input; with -r, I expect something recursive. So try to stick with the industry standards, even if they are not written anywhere, because breaking them — like breaking game design — is going to make your users angry. Another thing I already mentioned: try native compilation. Super nice. It creates bigger executables, but they are much, much faster: on my machine, it goes from one and a half seconds to almost instant. Very nice if you can.

All right. As a conclusion: if you want to start using PicoCLI, I recommend you search for a task that you do often — maybe performance tests, your release process, some little tool to upload to a server — whatever it is where you have a script and you run "java -jar something". Try to find this use case first, because that's the money shot: you can use it directly, and it's going to make people happy. And if you care about user experience, you might see adoption grow — maybe your product owner will start using it, maybe other teams will get interested in the tool. That's what's nice about PicoCLI. And try native compilation if you can, of course. It will mean the binary doesn't run on certain systems, but it will be much faster on your own system — and I think those things are especially useful on CI servers: on the machines that run CI, if you have a few tools that help people debug, they can just read the output; it makes more sense. And last but not least, try to have fun doing it.
That's what I always start with. A couple of resources — nothing crazy: the PicoCLI website, which is super well-organized and well-written. One of the things I like is that it's a single page, so you can Ctrl-F the whole page and you're done. The source of the project is on GitHub. Another very good guide that I found is from Quarkus: Quarkus has a PicoCLI guide, and I think they mix quite well — Quarkus and PicoCLI go together, because Quarkus is also really lightweight. So they strengthen each other. If you need to contact me, my Twitter is in the description. Thank you very much. I hope you enjoyed it, and hopefully you want to try the library. You can find me on Matrix — that's how FOSDEM is doing the Q&A — and you can also hit me up on Twitter if you want. Cheers.

Let's dive into the questions — they keep flowing. The first one, the most upvoted one: I've tried kotlinx-cli instead; how does it compare to PicoCLI? That's what I was telling you a minute ago — maybe you can fill in, because you do have the experience. When I was searching for a tool, I saw kotlinx-cli and PicoCLI. I liked the simple fact that with PicoCLI everything is on one page, so you can just Ctrl-F really quickly. And I really like the annotation-based system, because we use Spring everywhere. So that's why. I was also dabbling with Quarkus at the time, and they had an obvious relation with GraalVM and native compilation. So, in short: went for PicoCLI, didn't look back. But I'm happy to have your experience. Yeah, on that, really quickly: I think the perk of kotlinx-cli is that it provides a more idiomatic way to do CLIs with Kotlin, with property delegates and other constructs. But PicoCLI definitely looks like the most powerful of them all, because it has extensive documentation and a lot of APIs. So if you have a more complex CLI application, that's probably the way to go.
It's true that it's not completely idiomatic. If you start having default values, you start having to play around with null values and stuff, which is a bit less fun. But we didn't have problems with that. That's true. So let's jump to the next question, still on the CLI topic: do you have some libraries to recommend for graphical CLI applications in Kotlin, à la ncurses? Yeah, as I was saying, I haven't found something for Kotlin, though I think Matthew or Martin shared something that looks really interesting with Jetpack Compose — because, you know, of course, Compose is the thing at the moment. So I'm going to have a look into it. The other one I looked into is Charm. If you have a minute to try it in your terminal, it looks freaking amazing, but it's not Kotlin, so I don't know how much cross support you can get. But no, the short answer is no, I don't know any. Awesome. So the library is charm.sh — the link was also shared in the chat room if you want to follow up. Next question: can I make this work from a main.kts file? Well, you know, the short answer for me is: why not. I don't see any reason why not; I haven't tried. Yeah, I can follow up on that — yes, you can actually do that. I think I did it in one of my scripts. It's yet another regular Java library that you can get from Maven Central and use. Next question: do you have any recommendations on how to distribute CLIs written in Kotlin? Ship a fat JAR, ship a distZip, put it in an npm package, something else? So what I have done is, because everybody around me was on a Mac, we have shipped native versions. I was just shipping the native executable. That's how we've done it. Putting a fat JAR next to it is, I think, the next best thing for me, because that's just what people are used to most of the time. I haven't tried npm — I'm curious if you have experience with it, actually.
I do. It still feels wrong to me to distribute Android and Java libraries via npm, mostly because — I mean, you can distribute the sources and you can use tools like included builds in Gradle and let people rebuild that module. You will have implications on the performance of your build, but you get all the benefits of building from source, so people can actually change it and rerun on the fly. So it really depends on your ecosystem. If you're building a CLI for a web app, it might also make sense to create an npm package for it. It really depends. That's a good point. My experience is in a Java ecosystem for back-enders, so npm was kind of out of the question, just because it doesn't make sense there. Absolutely. So let's jump to the next question. We already touched on Clikt, so I will jump to this other one: sorry if this was answered before, but are there any plans to move from KAPT to KSP? I have a bunch of KAPT scars. And yeah, I can concur — to me, the KAPT support in picocli was the thing that frustrated me the most. I had a five-year-old laptop and it was making IntelliJ completely freeze, literally freeze for three seconds. So I was looking into it; that's the first thing I checked last week. I haven't seen anything in the picocli issues that says they're going to support it, so it looks like it's up for grabs at the moment. But I kind of agree — if you ask me, I actually disabled KAPT for picocli because the performance impact is just too much. So I think we are done with questions. We still have some seconds, so I will just follow up on what Martin wrote: npm also has auto-update, but it does feel super wrong. Well, that's also debatable. Martin, maybe you'll join us in the room afterwards and we can talk about it. I think every package system has its own pros and cons, so it's hard to make everyone in the room happy. And we're going to be cut.
picoCLI is a small library that can be used to create JVM based command line interface applications. Within 30 minutes, we'll look at how to set up a project, create a small application and package it for others to use. picoCLI is nice for several reasons: CLIs are a great way to automate some commands we run every day. And because it supports Java and Kotlin, we can keep using our main language of choice rather than having to dive into bash or node. Finally, pico applications can be turned into native images using GraalVM, which allows for a nice end user experience. By the end of this talk, you'll have a basic knowledge of what picoCLI can do, how to set it up, and hopefully ideas of use cases to try it with! This is a byte size presentation so it'll have to go fast! - I'll introduce the project in 5 minutes. - Then jump into a starter project and the basic features of the library. We will compile and test it live - 20 minutes. - Finally, we will look into some of the advanced features, and look at packaging options for end users - 5 minutes.
10.5446/56645 (DOI)
Hi everybody, welcome back to Plone Conference 2021, Track 1. I am here with Nikola, who is going to be talking to us about sustainability and web development. Nikola, when you're ready. Thank you, Andy. Hello everyone. I'm super glad to be here this year, live from Sorrento, but I'm missing the people who are not here. Firstly, I want to thank my girlfriend, partner in business and in life. I'm Nikola, I'm a front-end developer and I run Raw Material. At Raw Material we work on Plone consultancy projects, trying to turn academic knowledge into something practical, to get the resources to have a more sustainable web. All over the world we are raising awareness of the impact carbon emissions have on our environment, and global warming and climate change are not slowing down. So we have to change our behavior to get a better world for the future. Fossil fuels are still the primary energy source, and renewable sources still require an effort across the entire supply chain. So we, as the key actors of our industry, have the responsibility to change our sector and study what our impact is. Have you ever thought about digital sustainability? It's not like aviation, where you have to burn fuel to get energy, to get work done, but still we are using a lot of energy, so we are having a lot of impact on the environment. Can we estimate the impact of web development, and of the web industry in general? To get an idea of the big picture, we have some comparisons to get an order of magnitude. Keep in mind that there were around 20 billion devices in 2020, and that we can estimate a Google search uses as much energy as a light bulb for several seconds. The internet generates emissions comparable to the airline industry; if the internet were a country, it would rank around sixth for electricity usage. And still, less than 3% of websites were powered by renewable energy in 2020. So yes, from a global point of view, for energy usage, internet emissions are enormous.
A website uses electricity in data centers, where a web server is running: it uses energy to run a web server, to publish resources, to run a backend, to serve resources to clients. When transmitting network packets, we use electricity to transmit resources of any kind. And we have end user devices, where we use power from batteries. These are the main focus of my talk, because as a front-end developer I'm focusing on client software. But as an industry we also have operations: travels to meet a client, developers commuting to offices — so remote work is optimal in this view. We have design, because communication is always a central factor when you're building a green software or service of any kind: with good communication, users can usually find what they're looking for quickly. But design is also about relevant content, and a durable design means that we don't have to rework stuff; it is also more workable for developers when we don't have fancy UIs. And of course we have development, because we are using devices, so we're using a lot of energy. Hosting is a field where we can have a lot of impact: we can cut a lot of electricity usage with small attentions, and have less impact by using renewable energies. There are several hosting providers that claim to use only renewable energy — check them, because there are many. And Snerb, which is a big player in Europe, is among them, by the way. Is this greenwashing? Greenwashing is a practice of companies in which they pretend to be environmentally beneficial while it is in contradiction with what they are really doing. I have here an example from the Shell website. Obviously they produce oil products, while their website is full of climate articles and sustainability reports. This is greenwashing: they are having a huge impact on the environment, but they are pretending to be green. There are also web companies like this.
They pretend to be green because there is hosting from renewable energies, but that is not enough to have a really green web. The correct way to address and analyze the impact of any product or service is the carbon footprint, which is the total amount of greenhouse gases you are emitting with an action. And you can assess that with a life cycle assessment, which is a tool to calculate the global emissions of a service, a product or a methodology, covering the whole process from the beginning to the end. We are working on data at Raw Material to get a model for a life cycle assessment for the web. Practically speaking: firstly, I don't want to give you a list of do's and don'ts, of good things and bad things, but I'd like to convey what is important to understand the real problem, so we have the tools to know what to do, and not just follow a guide. So, what is really drawing power in our machines? The power usage is behind the processing tasks that run on our CPUs, the GPUs involved in the painting process for the web and animations, and obviously networking for transmitting packets. We also have the screen, but I'll leave it out because the screen power usage is fairly constant and mainly under user control. Moving on to rendering, which is the process we are most involved in: it begins when the browser loads the HTML, then parses the HTML and the resources into the DOM, and parses CSS into the CSSOM. This whole process must be efficient when you're loading resources, because then you'll have the painting process. The painting process is the one in which the GPU is involved, and this is where the browser renders the contents of the page.
Our code must be efficient in the rendering process, because it's at the root of all the aspects we can act on: if we put extra effort into the rendering process, we are affecting power consumption. To optimize power consumption, we should limit computation while the browser is loading as much as possible. Both big CPU usage and long-running tasks damage power efficiency. But here we are speaking about performance, after all — this is nothing new. If you're optimizing for the main performance metrics, you're also optimizing for power usage. The first aspect to keep in mind is to have efficient code for user interactions, and even for the loading of the page. Metrics like Time to Interactive and First Paint are key vital metrics to understand the amount of calculation the browser is doing in the rendering process while loading your page: if you have higher values for those metrics, the browser has more work to do. When the page is not the frontmost content, or it is the frontmost but the user is not interacting with it, the tab should become idle — it should consume no energy. If a page transmits data periodically, maybe even when it is not the focused tab, the overhead can become bigger than the actual network transmission. The goal here should be to get the CPU idle as soon as possible. If you want a comparison with the car industry: you may have a truck, or an idle car that is barely emitting. Sometimes you have to consume energy, because that is what you are building — something that consumes energy by nature, like a video game or something based on animations — but then the goal is still to use as little energy as possible. Whatever computation we check, we see that no matter what technology you are using, there are green practices we can apply, which I can show you now.
The first one is to have valuable content and UX — which is always important for SEO too — because when users find the things they are looking for quickly, they won't visit many pages: they can find what they want right away. Design, as I said, from communication down to its impact on developers, has an enormous impact on the final product. Then videos, and media in general: we don't always have to put them in the page. If we can reduce them, it is better. Otherwise, we can use SVGs — vector graphics, which are just text. Another option is to optimize media to reduce weight; we have Plone's image scales for that. We also have lazy loading: it has less impact on the first paint, because we load the content only when it becomes visible. Use variable fonts when possible; otherwise, reduce the variants, because often we load a large number of font variants while actually using two. And then there is JavaScript. I know that several Python developers here could say that JavaScript is evil, the root of our problems. But I'm a JavaScript developer, I have to defend it, because the goal here is to use our instruments well. We should focus on how we are using JavaScript. I always prefer CSS-based solutions when possible. Animations are tricky, and you should use requestAnimationFrame when possible, because browsers like Safari — so WebKit — have an automatic system to stop animations, timers and transitions when the page is not visible. With JavaScript, we should stop timers when the page is not visible and when the user is not interacting with the page; we have the Page Visibility API to check that. Polling is another terrible practice — it is also a bad design example; we should use websockets instead. And the goal is to always have the CPU idle when the page is in the background or not used.
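As a concrete sketch of the "stop timers when the page is hidden" advice: the code below separates a small poller from the browser wiring, so the logic can also run outside a browser. The widget refresh and the interval are made up for illustration; `document.hidden` and the `visibilitychange` event are the real Page Visibility API.

```javascript
// Sketch: pause periodic work when the tab is hidden so the CPU can idle.

function createPoller(task, intervalMs) {
  let timer = null;
  return {
    start() { if (timer === null) timer = setInterval(task, intervalMs); },
    stop()  { if (timer !== null) { clearInterval(timer); timer = null; } },
    running() { return timer !== null; },
  };
}

// Browser wiring: keep the poller in sync with tab visibility.
function bindToVisibility(poller, doc) {
  const sync = () => (doc.hidden ? poller.stop() : poller.start());
  doc.addEventListener('visibilitychange', sync);
  sync(); // apply the current state immediately
}
```

In a page you would call something like `bindToVisibility(createPoller(refreshWidget, 5000), document)`; switching tabs then stops the interval until the user comes back.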
For CSS, I recommend avoiding importing huge libraries when you are only using a small part of them. I also advocate for framework-less front-ends: maybe you should consider theming your website without Bootstrap or Semantic UI or anything else, because often they don't have that much impact on your workflow. CSS can be heavy on rendering. CSS grids are really heavy on the GPU, because the browser has to re-render every element contained in the grid to get positioning and spacing. The same pattern holds for Flexbox: when you have several nested flexboxes, the browser, with multiple layout passes in action, has to recalculate all the dimensions. But CSS will always be good, even for animations — use it with love. Static websites and PWAs are easy to optimize to consume very little, so they can be a good choice. And put anything you can in a cache. Dark mode is recommendable, as it will consume less energy for screen time, at least on OLED screens. I made a switch for dark and light mode on the Raw Material website, with two lines of JavaScript and a set of CSS variables — so it is neat and easy, and I recommend it. The always-good suggestion is to keep things simple. Do you remember computational complexity from university? The simpler is always the better. Keeping things simple gives you more maintainability, so durability and the ability to make improvements over time. I can never repeat it enough: keep it simple. We can also have a cool acronym here: keep internet simple and sustainable. But how do you know if you are on the right path? You can find a lot of academic papers, lists of good and bad things, or poorly documented tools. Academic papers are, in my opinion, not practical enough for a real project, and I'm working on a tool to actually measure the energy consumed by your service.
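The "two lines of JavaScript and a set of CSS variables" pattern roughly looks like this. The attribute and variable names are illustrative, not the actual Raw Material implementation:

```javascript
// Theme colors live in CSS custom properties, selected by a data attribute
// on <html>. The CSS side (shown as a comment) would be something like:
//
//   :root             { --bg: #ffffff; --fg: #111111; }
//   [data-theme=dark] { --bg: #111111; --fg: #eeeeee; }
//   body { background: var(--bg); color: var(--fg); }
//
// The JavaScript side really is tiny — it only flips the attribute:
function toggleTheme(root) {
  const next = root.dataset.theme === 'dark' ? 'light' : 'dark';
  root.dataset.theme = next;
  return next;
}
```

Wired to a button as `button.onclick = () => toggleTheme(document.documentElement)`, the repaint is handled entirely by the CSS variables, with no extra JavaScript work per element.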
We can compare services, but I didn't find any way to actually measure what's going on in the CPU. We should consider the platform, the hardware and the peculiarities of each system. We have the Safari Web Inspector, which is a nice tool if you have a Mac: it can measure the impact on the CPU, giving you estimates and statistics on CPU usage and rendering, so you can actually get some introspection into what has an impact on the page load. There is websitecarbon.com, a nice tool from Wholegrain Digital — they are great, they are doing a lot of dissemination, so props — but I didn't find any documentation on how the numbers are determined. It is a nice tool anyway. We also have the Green Web Foundation. They publish many valuable articles about this topic, and they have an open database of hosting providers using renewable energy, with an API to interact with that data. They also have a Lighthouse plugin, but to be honest I haven't checked it yet — I discovered it when I was on the train to Sorrento. There is the sustainability checklist from Mightybytes, which is a useful tool; I'll leave the link. But we are at the conference, aren't we? If you take care of all the aspects impacting power usage, you can optimize a website to be green. Volto performance is an actual topic for this sprint. It is already in a good place, because if you tweak it a bit you can achieve really good performance: we have SSR, and Volto behaves well as the basement of our projects. And then there is our work to build the actual site project — and I saw that the site project, with the theme and the customizations, is the root of any problem you could have. As I said, I'm working on image loading in the current sprint, and the Volto team is working on performance. So if you're interested in this topic, please join us at the sprint.
We can discuss things together. Wrapping it up: if we want a sustainable web, we have to consider the rendering of the page and the tasks the CPU is processing, so we can have a way of developing digital products that consume as little energy as possible — that is, with the lowest possible impact on the environment. But in my opinion, when we're speaking about sustainability, we can't have only environmental sustainability. You should have an all-round discussion about the sustainability of a service, product or method: economic sustainability, environmental sustainability, but you should also discuss social sustainability — sustainability for people. It comprises several things, like working conditions, equal opportunities for every person, gender equality, decent working hours and salary, and community involvement. If an industry is not sustainable for the people who work in it, or for the people who use that product or service, then it is not sustainable in the long term. So we need inclusivity for people of any ethnic root, sexuality and sex, and a focus on the gender pay gap. Overworking is a topic I could give a whole talk about: it is lovely that people love what they're doing, but you need balance. I strongly advocate for 30 hours per week, and I think that in an economic democracy we should have collective decision-making inside companies and work groups. Community impact is also good for both users and communities. Returning to the environment, you could also, as I said before, plant trees to support the reforestation of our planet. There are a lot of topics around social sustainability: we can speak about accessibility, which is the same matter; open source, which is good for both users and developers; privacy of the users; community impact, as I said; technical debt, which is affecting our industry; and the security of the services we're building.
So we are speaking about an ethical way to develop products. We must be ethical with our code and the services we're building, both for us and for the whole society. This is something the food industry has, and the garment industry is starting to have: the vegan approach. The vegan approach is not just about not having meat or animal derivatives — it is about an ethical process and the environmental and social impact. The whole process must be ethical from the ground up, and it has to be transparent and trackable. The LCA, the life cycle assessment I mentioned before, is perfect for this type of practice. So today I'm presenting the vegan developer: a responsible and aware developer who cares about the people and the environment. The vegan developer definition and the goals for a more sustainable web are in the Sustainable Web Manifesto, which is on the Raw Material website. The goal is far away, but we are doing research and development to achieve it. Here I have a photo of my fellow assistant during development — he participates in every call I have. If you're interested in the topic and want to dig deeper, I'll leave you some links. At the end of the slides there is a link to the slides with a QR code, so you can easily retrieve them. I will conclude by encouraging you to work for something that lets you sleep better at night, to work for something that makes you feel a better person, for both the planet and the society. Business interests only bring money, but everyone's interest changes the world. Thank you.
How much do we know about the impact of the technologies we use every day? How much is the web industry responsible for carbon emissions? Can we define an ethical way to develop? I wrote the Sustainable Web Manifesto and I'll tell you what it means. We'll look at the things that need to be fixed and the process to make a sustainable website. Finally, I'll introduce an introspection into Volto issues and the future work resulting from my analysis.
10.5446/56646 (DOI)
Hi, welcome back everybody to day three of the Plone Conference 2021. I am here with Alex Medishan, who will be speaking to us about advanced citations with Zotero and OpenAIRE in Plone 6. Okay, thank you for the introduction. So, my work in Plone mainly concerns Volto, its features and add-ons. This year I've mainly been working on developing the Zotero citations and footnotes. Let's begin with a short reminder of what Zotero is and how this looked last year — it's hard to see in the other presentation. So, Zotero is a library where you can store articles and books, and it has its own structure: it has collections that can be nested. If you want to use this feature with the add-on that I created, you have to make sure to have a collection where everything goes — mine is named "where all outside goes" — which is what is going to be used by our component, by our add-on. We'll see about that later. So how do we use it? In the setup, we have to make sure two things happen. One of them is that the API part is installed. Then, in the add-on's configuration, you have to configure a few things: for example, the Zotero URL that has the user ID, the API key, the default collection that I was talking about, and the citation style. The style can also be a URL, for the case when one of the desired citation styles is not in the list — they have about 25,000, I don't know why that many, but in our case it was useful. Okay, so, how it was last year. For example, my simple article for the presentation. This is a Slate item that is working together with the footnote — Zotero on the right, and the note. The sidebar on the right pulls the collections and the items from Zotero, as we saw previously. When we preview, we actually download it from Zotero and we can use it. This creates the citation and the indicator that shows the citation has been added. Let's add another one.
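The configuration described above boils down to a handful of settings. The key names in this sketch are made up for illustration — the real ones come from the add-on's documentation — but the point is which values the talk says you need at hand: the Zotero URL with the user ID, the API key, the default collection, and a citation style (a style name, or a URL to a CSL file among the ~25,000 available):

```javascript
// Hypothetical add-on configuration sketch — key names are illustrative.
function applyConfig(config) {
  config.settings.zotero = {
    baseUrl: 'https://api.zotero.org/users/<USER_ID>',   // library URL with user ID
    apiKey: '<API_KEY>',
    defaultCollection: '<COLLECTION_WHERE_CITED_ITEMS_GO>',
    citationStyle: 'chicago-note-bibliography',          // or a URL to a CSL style
  };
  return config;
}
```

In a real Volto add-on, a function like this would typically be the default export of its `index.js`.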
And here we can see the actual references block: we can see all the references as they are used, for example. One thing that we had last year is showing the citation as a popover, and it's the same thing for the footnote. Okay, let's add another one. Another thing that we had last year is the possibility to search — for example, if you don't know in which of the collections an article is, or you know it's not in Zotero. We also had OpenAIRE: through their API we can get publication data and research data, and we treat them the same way. So, if we use one of the results that we get from OpenAIRE — you see, it's the same kind of citation that we wanted — it will be added to Zotero and we can use it. It is first added to Zotero, and then we request it from Zotero in the citation style that we want. And now we look for it — remember, my collection name was "where all outside goes" — and the result comes in a second, and then we can see it. Another thing that we had was footnotes. It's a little different: you just select text, and it will be added at the bottom, in the footnotes block. This works on top of footnotes, so the footnotes block sees everything that is like a footnote, and we see all of them together: a citation is where you cite a work, and footnotes are your own notes — underneath it's the same tag element. Another thing that we have now, that we didn't have last year, is searching by author. By default the search looks at everything: you type a part of the name and it finds it. So this is new. What we had last year was also searching by title — you just enter the query and it shows the element. And the footnotes are treated the same way as citations. So, what is new?
So, as you probably saw, there is a delete icon there. The main thing that I created this year is adding multiple citations to the same element. So here I added two citations that were already added elsewhere, and you can see the result: you have three indices that show where in the references block those citations are, and in the references block the ones that are cited twice get an a and a b (and a c, and so on, for however many times they are cited); otherwise, if a work is cited only once, it has no letter. And by the way, this is backwards compatible: all the citations that were done before work just as fine. The way we did that is: the new references — whether you add multiple citations to an existing element, or create a new element with the new add-on — work exactly the same. Basically the element keeps one reference, and everything else goes into an extra property. A potential problem is: what happens if you delete the first citation? First of all, if you delete any of them, you see that the superscript changes, it disappears from the popover, and the reference that was, say, b becomes a. If you delete the first one, the first of the others in the extras takes its place — so, for example, Alexander loses the b just because the a, meaning the first one, was removed. For existing content, nothing has changed, because the object keeps the same shape and the new extra citations go into an extra property. The same goes for footnotes. So, this is a footnote cited twice.
But for example, now that I deleted one, you see the change right away. The footnotes block gathers all footnotes: even if one is referenced multiple times, it knows that it's only one. When adding, you get unique suggestions that you can either click or write, and it will know if you refer to the same thing. It's the same principle: if you delete the first reference, the next one takes its place, and it keeps the same structure — you see that it's referenced once, and the other one took over. Another thing we added this year, besides multiple citations and multiple footnotes, is the ability to copy and paste. When you paste, it knows which citation comes first — which one was copied, the Alexander one for example — and the same thing happens with the footnotes: if you paste, a footnote may get a different letter. The work for this was done in the footnotes add-on, and the Zotero add-on builds on that structure: in the add-on configuration you can see it depends on footnotes, and the render component is actually the one from the footnotes add-on. So, let's add a group. Groups can have their own context, let's say, and we can see it still works as before — the easy part is just the text. Now let's make it more complicated: put another group inside the group, and just paste something outside. First of all, let's add a citation that already exists and see if it updates. So we see a group here — we have a group in the group — and you see it says one: the first one is actually the second footnote, and then Alexander and the rest appear. And in the footnotes block, we see only one entry.
Okay, next we want to add a similar structure and see how they interact. So we put a title, then an accordion, and some text inside. Okay, so it knows that this one is new — it says number eight. So how do these two interact? It references the last one; but what happens if you want to reference a word that is in a closed accordion? It just opens it — I like how that works. Okay, another thing we can do: say you want to write some text and cite some works, but you don't want to show all the citations that don't concern that specific part of the text. So let's see what we can do: we add another footnotes block, and what we see now is that it shows only these two citations, because with the option on the right unchecked, it shows only its own context. What can happen is that users move the footnotes block, or put it inside an accordion, so you don't see it — right now it doesn't show anything, because we have to check "the entire document". And let's say we want to jump to the first reference: as you can see, it opens the accordion; and the other way around, if you click on the citation, it opens the references. So that's really all I have for this presentation. There were some other things I didn't show — some changes and adaptations were done so it also works in metadata. Thank you for providing us with such an interesting talk about advanced citations. If you would like to continue the conversation, I will post the Jitsi link in the Track 1 Slack channel. And thanks again for presenting today.
An add-on that uses the Volto Slate plugin to allow users to make citations by referencing documents in a Zotero library and OpenAIRE.
10.5446/56649 (DOI)
Welcome back. With me is Fred van Dijk. I think I first met Fred on a bar... oh, it's a boat, a boat and a bar, at the Bristol 2014 conference. I remember being very impressed by this guy who was very geeky and technical and could speak at length on all sorts of issues, as long as they were technical. Fred is a great contributor in the Plone community. He's also a great singer and tap dancer. Sadly, you won't see this unless you want to pay: if you sponsor the Plone Foundation, I will make sure you get that video. So Fred is going to be talking today about a relatively new tool for Plone migrations and a new use case for it. With that, Fred, please take it away. Thank you, Kim. And no, you can't bribe me, not for that video from the pre-recording. So hello, welcome. My name is Fred van Dijk. I'm presenting collective.exportimport. If you have been watching this track for the last hour, you might be thinking there's something wrong, because the previous talk was about exactly the same subject, but by Philip Bauer, the creator. I've been helping Philip, together with my colleague Maurits and other Plonistas, to get exportimport into shape. And we thought: exporting and importing and migration, you can never talk enough about it. So I will do exactly the same talk again, but in 30 minutes, and in my way, not Philip's. So you can hopefully get another, but similar, view on this. The excuse for this is that we found an extra new way to use collective.exportimport, because with the fixes we've done over the summer, we can also start using it for staging content between Plone sites, and that has always been a bit painful in this system. So my name is Fred van Dijk. I'm from Zest Software in the Netherlands. We are based in Rotterdam. Well, I'm actually still in Rotterdam, because since COVID our team has been working worldwide.
In this talk, I'd like to give you a technical, conceptual walkthrough of exportimport, so you know what is where. The details you should have already seen from Philip; I'm going to be more conceptual and show you a kind of roadmap of where to put what. The main problem I had in the beginning was understanding the ordering of steps. I'm also going to show you some banana peels, some frustrations of mine, so you don't repeat them. And I will demonstrate an advanced use case of exportimport, which we used to be able to stage some content. Migrations can be fun again, dixit Philip Bauer, sort of, in another universe. I mean, we've been doing Plone migrations for like 15 to 18 years. Plone has been constantly in development; if you want to have the new features, you have to upgrade. So we've had a lot of migrations in the past. The same Philip created a very cool, nice in-place migration and has been championing that for quite a few years. But there are some things there that make migrations not fun, so that's maybe what Philip meant. So exportimport is a new way to migrate; Philip has already said it: we are standing on the shoulders of giants. We use plone.restapi and its JSON serializers. We walk over the content, call the plone.restapi serializer on all content items, and save them basically into a file. And when we go to the other side, we import those JSON files again, call the deserializer, and we should have our content. That's basically what Philip explained. But please note: he created an empty package on the 11th of February 2021, so collective.exportimport is rather new, but here we are. So how did I find out about it? I started a migration in the autumn of 2020 with an in-place migration. It was a Plone 4.3 site, but probably there was data in there from Plone 2.5 or Plone 3.
So that's all the usual stuff that you should do with an in-place migration: clean up first, strip your source site using upgrade steps, then you move the Data.fs and blobs from the stripped site to the version you want to migrate to, Plone 5.2, run zodbconvert to convert the Data.fs from Python 2 pickles to Python 3 pickles. And after that you can run the plone.app.contenttypes upgrade, which is part of core Plone. And then you find a banana peel, and another banana peel, and another banana peel. Because we are still kind of migrating the actual data, which are pickled Python object instances, there can be some surprises: there can be some old data still in tools in Plone, and all those things are migrated along and might trip you up. They don't have to, but in my case they did, and I just cancelled the migration in December to take a break. Around February I saw Philip's announcement post and thought: maybe that might be interesting. So I looked into it, poked it a bit more, and around April, May, after some first contributions and testing, I started using it for the migration I had given up on with the in-place approach. Part of the problem I found with an in-place migration is that when you have a larger database, the time to run the steps becomes increasingly long. So you get a very slow feedback loop. Another drawback: unknown unknowns. You don't really know what's in your database. If you have a completely default Plone site, it's fine. But as I said, data structures and config from older add-ons that are no longer installed break your migration flow. And part of my personal frustration is that it's not possible to really estimate whether you're done until you're done. You keep finding banana peels, and when the banana peels are gone, well, then you're finished. Right.
So on the technical side it works, the end users are happy to start editing content, and then the first tracebacks start coming in, because you never know what's still lurking deep down in your ZODB, which is then exercised by new code. That was something we also had in previous in-place migrations, for example in Plone 4 from Archetypes to Dexterity, and also from Archetypes to Plone 5. An in-place migration forces you to make endless iterations over banana peels without a clear view on the time needed for a way forward: the age of anxiety. Another drawback that I like to point people to is that you require an intermediate Python 2 and Python 3 compatible environment in Plone 5.2 to be able to run zodbconvert. So your code has to be Python 2 and Python 3 compatible, and when it's done, you can strip away all the Python 2/3 compatibility stuff, which is part of what we're doing with Plone 6 anyway, but this is also something to consider. And what if you want to migrate beyond 5.2? Then you have to do those steps as well. All this badmouthing of the in-place migration: it's not bad. A lot of work went into making it stable. It works for default Plone sites. It works for the default content types. But who here has a default Plone site? And how old is it, and which editors and webmasters abused it, with add-ons installed in no particular order? I've seen, for example, a traceback on some Plone 2 discussion objects somehow popping up, and then the whole migration broke. So it's not right, it's not wrong, but there's something better. And our migration Bauer, well, Bauer was right, because he created this. So what do we have? We have an ETL transformation. This is the basic theory of an ETL transformation: you extract something from the old site, you store it, you then transform it into something more suitable, and then you load it up into your new environment.
This is, by the way, the same thing transmogrifier has been doing for years. Transmogrifier is also a very nice tool; we have been using it for migrations. But I must say my colleague Maurits figured out how it all worked; I never really did. And I must say that collective.exportimport, which you've already seen with Philip, is much cleaner and simpler in its concept. If you map an ETL transformation onto collective.exportimport, we kind of skip the transform step. From the old site we export all the content into JSON files, and then we load the JSON files back into the new site. And until we're done catching all the exceptions, or if you want your own fixes on the new site, you can have some browser views or other upgrade steps that still fix your content in another way. And the beauty of this is that you can do it on the new site only. Out of the box, when we export content from Archetypes, we have an extra checkbox, which Philip showed you, so that you can already fix content on export so that it is actually Dexterity, the new base content type system. Then we load it up again. And you can customize these steps with the dict hooks that Philip already showed you; we're going to see another example of dict hooks later on, in the second part of this presentation. This is what we're still working on; we're still trying to improve collective.exportimport. So you have the second load step, and then you run fixes afterwards, but what we'd like to have, at least for all the default content types and default situations in Plone 5.2 Classic: we want to have the portlets and maybe Mosaic layouts and other things fixed. And that's still ongoing work, to get those not into an extra step that we're now doing manually, but to feed those things into the load step.
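The dict hooks mentioned here are, as far as I recall from the collective.exportimport docs, methods you add to a subclass of the export view, named along the lines of global_dict_hook and dict_hook_&lt;portal_type&gt;; verify the exact names against the package README. Here is a simplified, self-contained stand-in for that dispatch pattern (pure Python, no Plone required), just to illustrate the idea:

```python
# Simplified sketch of the "dict hook" pattern: before an exported item
# dict is written to JSON, a global hook and then a per-type hook may
# modify it. Method names mirror (from memory) the collective.exportimport
# convention; treat them as illustrative, not authoritative.

class ExportSketch:
    def apply_hooks(self, item, obj=None):
        # global hook runs for every item
        hook = getattr(self, "global_dict_hook", None)
        if hook is not None:
            item = hook(item, obj)
        # per-type hook, e.g. dict_hook_document for portal_type "Document"
        name = "dict_hook_" + item["@type"].lower().replace(".", "").replace(" ", "")
        hook = getattr(self, name, None)
        if hook is not None:
            item = hook(item, obj)
        return item

class MyExport(ExportSketch):
    def global_dict_hook(self, item, obj):
        item["exported_by"] = "staging-script"  # annotate every item
        return item

    def dict_hook_document(self, item, obj):
        item["title"] = item["title"].strip()  # per-type cleanup
        return item

item = MyExport().apply_hooks({"@type": "Document", "title": " Front page "})
print(item)
# {'@type': 'Document', 'title': 'Front page', 'exported_by': 'staging-script'}
```

The point of the pattern is that all fixes live in one small subclass on the export (or import) side, so the old site's code stays untouched.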
So those are the benefits of using extract, load and maybe fix, and doing small transforms on the extract and on the load. You don't need the intermediate Python 2 / Python 3 environment. So we can use the same principle later, when we are not migrating from Plone 3 or 4 to 5.2, but to Plone 6. collective.exportimport itself is just one add-on, which you install in your old site, and that's it: you don't change anything else in your old site, you're not going to change data there beforehand to clean it up or remove unwanted add-ons. You can export the data in any order. You can export the content and export the metadata on the content, and, what I didn't mention here, you can also leave out content types. And that's an important thing: it's not all or nothing. You can just say: look, we've only got five or six collections in this old site; why run all the work and take the possible risks if we can recreate those collections in 10 minutes on the destination side? So if you have some content that's still left there, just leave it out. If you have a content type that has a completely new Dexterity implementation and it's only three, four or ten items, you can choose not to export and import them, but just create them when the site is done. The freedom you have here makes things much easier. You can write fix code in Python 3, because we only run the export on the source site; of course, if you have custom data, then you'll have to do some Python 2 export work, because otherwise it doesn't get out, but for most things you can fix up things in the last stage. And I must say it's very satisfying to see your site as plain text files. I always had this concept of Zope being a kind of black box: you put things in through the web...
...when you started as a newbie with Plone. And then it's in there, and you can't get it out unless you go through the web interface again. This demystifies it: it's just text. Everything is just text if you want it to be. So, technical bits: as was mentioned, collective.exportimport is not big, because we have those serializers and deserializers. They have been battle-tested by the whole Volto frontend project, because Volto has been using those exact serializers and deserializers for years, whenever it sends a content item to the frontend runtime. But there is one thing here, and that is that we have to do a two-phase export/import. We first export the base content tree, and then we reapply metadata, because the elephant in the room, which is always there with migrations, is the chicken-and-egg problem. And the chicken-and-egg problem is on metadata. You start saying: okay, just export an item, export another item, and that item is a folder and says: look, I have a default page, and the default page is an item, but that item is not there yet. If you want to order all the items in a folder, all the items have to be there before you can do the real ordering. The same goes for relations between items: if you refer to another item using a relation and the other item is not there yet, you can start trying to create something in the relation catalog with functions, but it might say: hey, look, a relation to what? Local roles, ownership, portlets: same story. That's why we have this two-phase export/import: we first import the content, and then we import the other metadata things. And then, I'm not sure, May, June, somewhere in the summer, Philip figured out a way to export all the content in the right order, so that it can also be created again in one step on the import. Before, you first had to export all folderish items, and then all non-folderish items, so all the branches were there; then you could import the leaves as well.
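To make the chicken-and-egg point concrete, here is a tiny, hypothetical model of a two-pass import in plain Python (no Plone): pass one creates every item, pass two applies metadata such as relations, which only works once every referenced target exists. The item shape loosely mimics the JSON export, but all keys here are illustrative.

```python
# Hypothetical two-pass import: metadata (relations, default pages, order)
# is only applied after ALL items exist, mirroring exportimport's
# "content first, metadata second" approach.

def two_pass_import(items):
    site = {}
    # pass 1: create bare content objects
    for item in items:
        site[item["@id"]] = {"title": item["title"], "relations": []}
    # pass 2: every target now exists, so references are safe to resolve
    for item in items:
        for target in item.get("relatedItems", []):
            site[item["@id"]]["relations"].append(site[target]["title"])
    return site

items = [
    # /news references /events, which appears LATER in the list;
    # a naive single-pass import would fail on this reference
    {"@id": "/news", "title": "News", "relatedItems": ["/events"]},
    {"@id": "/events", "title": "Events"},
]
site = two_pass_import(items)
print(site["/news"]["relations"])  # ['Events']
```

The same two-pass logic covers default pages, folder ordering, local roles and ownership: anything that points at other objects has to wait for pass two.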
And it's just a matter of finding the right order, starting at the top, and having a smart way of doing that. So we now have a content tree export, where you don't have to export the content items one by one, but you can export them in one go, and the algorithm will figure out how to do that. Part two: because of this content tree export, you now have just one JSON file with all your content in there. Philip already explained that you can choose to have the binary data in there as well, as base64, or, as we did with various large site migrations, we can store the blob path key there instead: you copy the blob storage, and on import we look at the blob path key and recreate the connection from the metadata part in the ZODB back to the blob storage. That saves a lot of data. But I still prefer, for this particular use case, to have it in base64, because I would like to copy some staging content from one site to another. The use case, for my customer: we have an international English website, and they want to launch new marketing campaigns, but the whole setup so far has been that the subsidiaries worldwide of this customer, Zeelandia, are all delegated; they have their own website, they have their own freedom to do things. But still, you want to avoid doing double work. If you have separate sites that were launched at different periods in time, it's difficult to just copy content around, so people start recreating it. So we thought: hey, wait, what if we could export the content from the main site, where we create a folder with the landing page, some EasyForms, some content and pages; we can export it using collective.exportimport to a JSON file, go to the subsidiary site, import the content again, and ask the local subsidiary marketers to translate it and adapt it a bit to their local use case. And we're there. It has been tried before; we have a lot of different solutions for this.
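The two blob strategies mentioned here, inline base64 versus a blob path pointer into a separately copied blobstorage, can be sketched like this. The exact JSON keys collective.exportimport writes may differ, so treat the field names as illustrative:

```python
import base64

def serialize_file(data, filename, use_blob_path=False, blob_path=None):
    """Return a JSON-friendly dict for a file field (illustrative keys).

    use_blob_path=True keeps the JSON small: only a pointer into a
    separately copied blobstorage is stored. Otherwise the bytes are
    inlined as base64, which is handy for small staging exports.
    """
    if use_blob_path:
        return {"filename": filename, "blob_path": blob_path}
    return {
        "filename": filename,
        "encoding": "base64",
        "data": base64.b64encode(data).decode("ascii"),
    }

inline = serialize_file(b"hello", "greeting.txt")
print(inline["data"])  # 'aGVsbG8='
pointer = serialize_file(b"", "big.pdf", use_blob_path=True,
                         blob_path="0x00/0x1f/big.blob")
print(pointer)  # {'filename': 'big.pdf', 'blob_path': '0x00/0x1f/big.blob'}
```

For an 80 GB blobstorage migration the pointer form is the only sane option; for a four-item campaign copy, inlining base64 means the single JSON file is fully self-contained.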
If you want to do this without any coding at all, it works best to create a kind of portal site with subsites, because then everything is in a single ZODB, and it's rather easy to copy-paste from one part of your site to another subsite. But as I said, if you're historically coming from a situation where sites have been added and added over the years and you have the small-separate-sites approach for your organization, then this is a very interesting option. And as I said, with content tree support we can export part of a site to JSON, and we can then import it again into another site. The little problem we had with Zeelandia is that we also provide them with Mosaic layouts, and Mosaic layouts are a feature that was not yet taken into account by exportimport. Especially not because we were so smart as to use persistent tiles. Persistent tiles are not bad, they are a model, but those persistent tiles store their data somewhere else. A little bit about Mosaic layouts: you have a behavior on a content type, and it works just like the Volto blocks behavior. It adds a few fields, and one of those fields is the content layout, which is just a structure of divs. And the divs refer to tile types by a link; you will see example data. And the tile data, the data displayed in the tile, just like the data displayed in a Volto block, is either stored URL-encoded in this main content layout, or it's stored on the context of the item itself, which is what most of the plone.app.standardtiles tiles do, or there is a persistent mapping in an annotation on the context. So when we first started testing this, all of our normal content was actually already migrated; even EasyForms were just exported and imported by collective.exportimport, because it just loops over all the fields in the schema. But the persistent mappings were missing. So this is how a persistent tile is referenced in the content layout.
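Persistent tiles from plone.tiles keep their data in annotations on the content object, under keys that, as far as I recall, start with plone.tiles.data. followed by the tile id. A simplified stand-in, using a plain dict in place of Plone's IAnnotations mapping, shows how an export hook could gather them:

```python
# Collect persistent tile data from an item's annotations.
# In real Plone code the mapping comes from
# zope.annotation.interfaces.IAnnotations(context); the key prefix
# below matches the plone.tiles convention as far as I recall.
TILE_PREFIX = "plone.tiles.data."

def export_tile_annotations(annotations):
    tiles = {}
    for key, value in annotations.items():
        if key.startswith(TILE_PREFIX):
            tile_id = key[len(TILE_PREFIX):]
            tiles[tile_id] = dict(value)  # shallow copy of the mapping
    return tiles

annotations = {
    "plone.tiles.data.3a9f": {"text": "<p>Hello</p>"},
    "plone.portlets.context": {"unrelated": True},  # not a tile, skipped
}
print(export_tile_annotations(annotations))
# {'3a9f': {'text': '<p>Hello</p>'}}
```

On import, the same tile ids are used to write the mappings back into the annotations of the newly created object.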
So we have a div with a data-tile of type plone.app.standardtiles HTML, and then there's an ID pointing to the annotation. So we asked my colleague Maurits, also known as our release manager: can we fix this, can we use the dict hooks that Philip built into exportimport and do something there? And yes, we can, so I'll actually try and start a demo video; I hope it starts playing. If not, I'll just minimize. Just play. It doesn't go there. Somebody very smart said, 30 seconds before I started the presentation: didn't you test it? No, I didn't. So we'll just do it like this. There it goes. So what you see here is one of the testing environments of Zeelandia; if you develop Plone sites, always have a testing environment. And we have a Mosaic layout here, which was created on the zeelandia.com testing environment. And we go to the export content view. You can select the content here. And of course we leave "include blobs as base64 encoded strings" on, so we are not going to use the blob path trick, because this is not a huge migration with 80 gigabytes of blob storage; it's just a campaign of a few items. And a little trick: we export the JSON not to a local file on the client, but to a shared folder on the server. So we export to the shared folder on the server; that's the path. And now we can go to another site. We have like 20 sites, also bad practice, but historically grown; we run multiple sites in one Zope service, and that's where the shared directory comes from. We share this export-and-import directory, with four items. Then we go to the import content view, we select our marketing campaign, and we import. So we've imported four items; that's correct. And now if we look in the folder, there should be something there.
So, as you can see, we only exported the content tree for now, so for example the default page, which was a Mosaic document with a Mosaic layout, didn't get set as the default page here; that's something we now have to fix up manually. But when I go into the Mosaic layout, you see it's there. You can just edit it; it works. So that's the basic demonstration. And this is in use now by Zeelandia on some of their sites, to export some of the marketing campaigns, also already in production. So how did we pull this off? We created a dict hook on export. We check if the layout view has been set, and we check if the layout behavior is also available on the context. Then we loop over all the annotations and see if the annotation key starts with plone.tiles.data, and we export the whole tile data with json_compatible; json_compatible comes from plone.restapi as well. That's a bit where the magic happens: json_compatible has adapter-based converters, but we had to re-register them, because the context here is not a content type; the context in this part is a persistent mapping. So we have registered three adapters now, for the most common use cases: rich text, named file and named image, because those are field types that are not just a simple "field is value". And we do an extra trick: we add an extra key to the dictionary in the JSON on export, called "converter", so we say: look, this was rich text, that's now in the JSON. Then when we import, we create a new persistent mapping, we look at that converter key, and if the converter says rich text, we import the JSON with another helper function that recreates the rich text; you can see it on the right side. We also have create-an-image and create-a-file. And we restore the annotation on the context. And that's it; that's how it works.
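The converter-key trick described here can be sketched as a round trip. The converter names and dict layout below are made up for illustration; in the real add-on the heavy lifting is done by plone.restapi's json_compatible on export and by RichTextValue/NamedFile-style constructors on import:

```python
import base64

# Export: tag each non-trivial value with a "converter" key, so the
# import side knows how to rebuild the original object.
def export_value(value):
    if isinstance(value, bytes):
        return {"converter": "file",
                "data": base64.b64encode(value).decode("ascii")}
    if isinstance(value, dict) and "raw_html" in value:
        return {"converter": "richtext", "raw": value["raw_html"]}
    return value  # plain "field is value" case passes through untouched

# Import: dispatch on the converter key to restore the value.
def import_value(value):
    if isinstance(value, dict) and "converter" in value:
        if value["converter"] == "file":
            return base64.b64decode(value["data"])
        if value["converter"] == "richtext":
            return {"raw_html": value["raw"]}
    return value

original = {"raw_html": "<p>Hi</p>"}
assert import_value(export_value(original)) == original
print(import_value(export_value(b"PDF bytes")))  # b'PDF bytes'
```

Because the converter tag travels inside the JSON itself, the import side needs no knowledge of the tile schema; the next step the talk describes, exporting a mapping from annotation ID to tile class, would make this fully schema-driven.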
But what we would like to have is a more generic system here. The limitation now is that not all field types are supported yet, so with the dict hooks we're manually fixing the gaps, and we would like proper adapters for all the field types. Another thing we bumped into is that we export from an English site, but then import, for example, on a Greek or Spanish or Polish site. So we simply look up what the default language is there and set all content to that one, so that afterwards the content editors can translate the items. Another thing we noticed we actually need: we have this converter key now, but it would be nice if we knew exactly what the schema of the persistent mapping of the tile was. So we now also export an extra mapping that takes the annotation ID and maps it to the tile class. With that, and with the extra adapters, we could go completely schema-based, and then we're almost there. One important category of field that's still missing is RelationValue and RelationList, which we are working on now. So: more adapters. We already have a lot of these converters, but we need to hook them up. And this problem is not only here: it's also in portlets, and it's also in Volto blocks and other places, so we have multiple places where we have to convert per schema. Some tricks. Oh, sorry, that's not working. How much time do we have left? Okay. The chicken-and-egg problem of metadata again: what we learned is that when we export relations on tiles, we run into the same issue we have with normal related items on the context. We have to fix some stuff there. Going to wrap up: we want to create a generic bundle format for exporting content, with all this metadata and the content tree ordering right.
Could we get it back into the system in one go, where we separate the metadata from the content tree? So, experience: it's a lot of low-level work, but the actual data in a Plone site has become much clearer to me. There is apparently a right order: first export and import the content, then reapply the metadata. Using extract and load, you get a much faster feedback loop on the content that's there. You can first export everything, then adjust and import again, and once it works, we can improve the tool for everybody, which is what's happening now in collective.exportimport. Tips and tricks: keep a log of the order in which you did things, also with the output, so that you can verify 100 items in, 100 items out. You can create a helper add-on for your project where you override the defaults; what I did was just fork collective.exportimport in a branch, make the fixes there and use that in my project. There are some really handy tools here, because we're not there yet with the defaults: collective.migrationhelpers can help you with some fixes in the third step I showed here, and some of those are being moved into collective.exportimport. A very nice tool is collective.searchandreplace, because you can search for old table classes, for example, in Plone 3 or 4 sites, and use regexes to replace them in all the rich text there. And collective.relationhelpers has a very cool control panel where, when you import relations, you can actually inspect them and see if they're already there. One thing there: don't bother to import the link integrity relations. I spent multiple hours on that, but you can just rebuild the link integrity information in your target site, and then it's all there again. So, mind your step: this is the ordering of what you should do. Check disk space, also very important: if you have a 10 gigabyte import, then temporary files get created; we still have to figure out whether that's necessary or not.
If you use the blob path trick, that's less of an issue; most of the temporary data comes from the base64 imports, and there is some code in Plone now that handles this. And the biggest thing: initialize the site first. If you have not selected the default language yet and you start importing content that has this language field, then the basic Plone validators will barf and say you can't create that content. What's still to be done? I think Philip already said that there's a lot to be done. We could use a lot of help. And this is not done for Volto yet; that's also something we'll have to look at on the next project: whether we can go directly from a Plone 4 site to a Volto site. We'll have to fix the folderish pages, we'll have to convert the rich text, and we'll have to convert collective.cover and Mosaic layouts to blocks. So, thank you. Those are amazing tips and tricks. It sounds like the voice of painful experience is going to be put to great productive use for the Plone community, so thank you. Please join us in the Jitsi. Thank you very much, Fred. I look forward to seeing what else you're going to do for fun, because man, you don't do enough. Thank you for your attention. Thanks, Kim, for presenting.
collective.exportimport is the latest addition to our migration toolbox, and we have achieved good results with it when upgrading Plone sites to Plone 5.2. But a new 'side' use case for this maturing add-on is distributing content trees between existing Plone sites, for example to create an online marketing campaign and deploy the setup to several country sites for translation and local use. I will demonstrate the 'content copy' use case and discuss the current state and planned/wished improvements. As a related subtopic, I will also touch on the current capabilities and caveats of exportimport when using it for migrations, based on our experience so far.
10.5446/56651 (DOI)
Thanks everybody, we've had another wonderful day here and everywhere around the globe. Let's start the lightning talks. We are going to start with Peter Holzer, who will tell us all about his first Plone 6 site. Peter, take it away. You have five minutes, and I will air-horn. How much time do I have? Five minutes, just so you know. Okay, bye. Okay, so hello everybody. I wanted to share my experience doing my first Plone 6 site, with a bit of e-commerce, and it was in April 2020. A friend came up to me and said he wanted a consumer shop for his greeting cards. And I thought: yeah, we have everything: Plone, we have a shop, we have Mosaic and collectionfilter, as you'll see later. And I had just returned from the sprint where we finalized our plan for how we would theme Plone 6. And yeah, maybe I'll show you a little bit. Screen on. So this is the shop; the start page is basically a Mosaic page. We have a slider, we have some kinds of content listings here, add-to-cart buttons and other stuff. We have bookmarking: we built a bookmarking tool that saves your favorites, and we also used its backend part for React projects. Then we used the productshop add-on, which has ready-made content types. The overview pages are also Mosaic pages with collection filters, and we have even more filters based on the indexes of the products that we have here. A nice little extra function. We also have a nice detail view. The navigation is done with collections; these collections are also Mosaic pages with collection filters that build up the structure. We show related items. So, what did we do? We created a new Plone theme on Bootstrap 5. We updated our whole shop stack, including YAFOWIL, which is used for the checkout forms. We have the mentioned bookmarks. We have productshop, which is a use-case product with everything in place to start a shop.
There's also a discount package, which allows you to add discounts on cart items or on sections of your site, for users or members of your site, and so on. How much more time do we have? We also started to build some REST endpoints, for instance for the order data that we have in there. Lately, we added another payment integration, for Wally. I'm still surprised that not everyone knows about the shop, but we are kind of bad at doing releases, or we are good at playing the impostor. But we will meet next year, in March or April, around that time, and we'll be sprinting on the commerce add-ons. So if you're interested, have opinions or feedback, or just want to work with us: join us. And we will then finally, hopefully, be doing a release, and it will be Plone 6 only. Thank you, Peter. It's such a shame I didn't get to use the air horn yet, but I'm sure we will get to that sooner or later. I should move back into position here, because otherwise everything goes wrong. Next up we have Michael McFerrin, with the thing that everybody has been waiting for, namely web developer confessions: really embarrassing stuff that people voluntarily gave up in this anonymous, maybe not so anonymous because it was a Google Form, but maybe anonymous web form. So Michael, please take it away. Michael, you're muted. Yeah. Now I'm not muted. I have a confession to make: I can't find the mute button sometimes. I made this presentation and I'm really proud of it, so I want to share it with you. One person admitted to using Perl as their CGI, as their gateway interface. We don't know who owned those Perl scripts, whether it was root or apache or maybe someone else. So just chmod them to 777 and you can get everything to work. Yeah. I did some research here and found out how you can implement blink if you want to. Yeah, because it's bad that they took it away. These slides will be available after the presentation. Yeah.
Sometimes we have to do this because we need to access people's accounts to fix their stuff, but we do take precautions. It's important to put power into the hands of web editors, because we have lives. We have to do things too: we have sleep and video games and stuff. So it's good that other people can do work. Yeah, this slide kind of speaks for itself. There was a great talk today about importing into Plone sites, and this was not covered; it's another way you can do imports. I don't know who this is; his name is Root. This, to me, is not a problem. I included it because it was there. Hola. Sometimes we can't find what we need to edit. This is why we get paid good money for our work. I don't like changing something and seeing it affect something else; I agree with this. True stories. At least they were true on a Google Form. Eh. Um, downtime? Downtime is good. Yeah, custom folders are really, really nice ways to shoot yourself in the foot. This is... yeah, but you can get the work done so quickly. This was a good tip. I didn't know this, and I'm going to use it. Yeah. We'll have to figure out the air horn; otherwise, I'll just shout. Next up is Fulvio Casali, who will be talking about Plone sponsorship. So if you need to atone for all the sins that were just shown in Michael's talk, this is an excellent way to relieve your conscience and pay all your sins away. The Catholics have done this for millennia; it works, guaranteed. So if you did any of those, we have the solution. Fulvio, take it away. Can you hear me? Can you see my screen? Yep, you're good. All right. Hello, everybody. I'm Fulvio, and I am here to encourage everybody to make recurring donations. And just to clarify: what I'm talking about here is not conference sponsorships, but the kind of sponsorships that you can make on a regular basis. Because barter economies are nice, but in the real world, sometimes we also need hard cash.
In open source, contributions and labors of love are like gas in the tank. Everything you do, day in and day out — writing documentation, writing training material, getting training, writing code, helping out in teams and so on — is a labor of love, and it's the gas that we need to get the car to move. Because without the gas, the car won't run. And thanks to you, we have a lot of gas. We've been running for 20 years and still have a long range in our tank. And if the tank's too small, we can get a bigger car, we can get a truck, we can get an airplane and pollute the atmosphere with greenhouse gases — sorry, wrong talk. But sooner or later, gas isn't enough. You also need an oil change; there are parts, insurance, registration fees, and so on, which require hard cash. If you want to see how we spend the money in the Plone Foundation, you can see the annual financial report at the annual meeting this week. But in a nutshell, this is how it works. On the right, we have administrative costs: we have to pay for hosting, we have to pay for Google Workspace, Mailgun, trademarks. Sometimes we have to hire lawyers, we have to pay fees and memberships, and we also sponsor other things, like other conferences, the Python Software Foundation and so on. But in return, we try to give you a nice car to travel in, with sprints and conferences, and we pay a small stipend to release managers, just to make our lives easier in filling the gas tank. Now here's a question — and there's a poll in the slide on the right, and I will really be interested in the answers. Do you support independent media? Do you pay a monthly subscription to your favorite artists or your favorite podcast using platforms like Patreon, Substack, Medium, YouTube, and so on? Please let me know. Maybe you just pay $1 a month, $5 a month, whatever it is. If not, ask your boss — they probably do. And here's another question: have you watched or listened to the Plone Newsroom, the Plone podcast?
And I rest my case. And that's all, because I want to give you the satisfaction that comes from being a financial contributor. And we have people who do that already; they are listed on this page. Those are individual sponsors. We also want to help companies sponsor Plone, as corporate sponsors or as university sponsors — there are different kinds of sponsors. There's a subset of sponsors, which is providers; not all sponsors are providers. There are some companies who are not technology companies — there is a cleaning company that just loves Plone, and they use it for their website, and so they want to help us, so they keep giving us money and we gladly accept it. But if, as most of you do, you work for or run a company that provides Plone solutions or hosting or whatever, you can also be listed on plone.org, which gives you a little SEO. All right, final thing: contact me. Don't contact him — I've been doing this for a year. And leave him alone. Thank you. Okay, thank you very much for that inspiring talk. And yes, this would be a great time for you all to sponsor. Next up is Dylan Jay with his talk on why Drupal won down under, which I think means the continent down under, given the region he's from. So take it away, Dylan. Cannot share screen while another participant is sharing. I'll share my screen. Stop sharing. It's definitely oversharing. Sharing. Okay. Just stop. Okay. Is that working? No. Your five minutes are starting now. Okay, so I really enjoyed RedTurtle's presentation the other day, and it was interesting seeing how many government websites are still getting done with Plone. So this is a couple of stories from the governments that we work in, which is the UK and Australia, and how other CMSes really won the day — which I thought could be useful, particularly now that we have a kind of opportunity with Plone 6, maybe, to gain some ground back.
So, in New South Wales in Australia there's a company called PreviousNext, a Drupal shop. They got the contract to build the main portal for one of the states there in 2009. And I think this is the really clever thing that they did: they launched something called aGov, which was a distribution of Drupal specifically tailored for Australian government. There was already government-focused Drupal work — a distribution and a community around government stuff for Drupal — but they specifically made one for Australia. I think it's somewhat similar to what happened with Brazil. They open sourced it, and they launched it with a big launch party and everything, and talked about how this was the Australian government CMS that's open. All the wording and messaging was like that. It came with example content, and it particularly focused on WCAG compliance, with tools for checking WCAG accessibility, etc. And that was in 2012. They also organized a DrupalGov conference in Canberra, which is the capital. PreviousNext then, obviously, got the contract to launch the main portal for the federal central government. And then there was talk of the government proposing to standardize on Drupal as a technology. Their idea was: we're not going to go with just one provider — because it's open source, we're going to have multiple providers — but we're going to have one technology, because that's going to save us money. In 2015 they launched govCMS: effectively they forked the distribution, and along with it came a hosting platform, which was backed by Acquia and built on their Cloud Site Factory system. And in fact, most of this stuff here couldn't have been done without Acquia.
Acquia, if you don't know, is a sort of big behemoth company, and what they do, which I think is clever, is they offer support and backing for smaller companies. So you could go in and bid on bigger projects and say, well, we're supported and backed by Acquia. Someone told me very early on, when I started working with governments, that big organizations like to deal with other big organizations — they feel comfortable that way. And you can deploy the distribution yourself, or they can host it for you — there are multiple options there. So the second one I want to talk about is the UK. They originally had Directgov, which was their government portal. In 2011 they did this huge audit and worked out they spent a billion pounds on IT infrastructure — not just websites, but IT across government. What they did is they said, okay, we're going to have one CMS and it's going to be one website. So they actually took hundreds of government websites — all the content from those government websites — and put them onto one purpose-built CMS, which was built in Ruby; I think they built it themselves. So they don't have a template that they roll out again and again for different agencies; they have just one website with all the content in it. And the clever thing they did is they split out the services: anything that's basically app-related or form-related, or a bit more complicated and needs data, runs on its own servers, separate from the main government website. Where you have a button that clicks off to it, it still looks the same — there's a whole design guide to make it look exactly like the main site — but it will run on different servers and so on, and it can be written in anything. They don't mandate what technology it's written in; they just mandate what it looks like, and accessibility and so on.
So there's a bunch of ideas for Plone there — maybe this idea of making a distribution just for your particular government, I think, is a really good idea. Okay. Done. Okay, okay, thank you. If you would stop sharing, then we can get on with the next talk, which is going to be Johannes — or no, sorry, that's the next one — it's going to be Maik first, Maik Derstappen, with the current state of Mockup. Take it away, Maik. Maik, are you there? I can't hear you. Yes. I can't hear you. I didn't hear you, Maik. You should probably unmute yourself. It is difficult — technology, I know; man and technology, never a good combination — but there is a button down at the lower left that says mute and unmute. He's gone. Okay, so we will switch order then. I don't know if... Oh, there he is. Maybe Eric, if you can let him in. We will give it one more try before we move on to the next contestant — this is almost like Squid Game. We have remote buzzing equipment in place. Maik, try again and unmute yourself. Yes. Yeah, it wasn't working. You can hear me now. Yes. Okay, I will just go ahead, because there's a lag. Yeah, you should close Loudswarm, because we're hearing it a little bit in your background. Okay. I just want to give the current state of the Plone 6 JavaScript development, which has been going on over the last year and a bit more. The first thing is Patternslib, which is partly used in Plone; the migration there was finished a while ago, mostly by Johannes Raggam. We also replaced some of the Mockup patterns with alternatives from Patternslib, because they are better maintained and do a better job in some cases. In Mockup itself, which is heavily used in Plone, most of the patterns are already finished; we have just a few patterns left that need a bit of work. The Plone Classic UI frontend works — basically, the only issues left are on some control panels. We will continue at the sprint on the weekend, and also, yeah, in the coming weeks. This is a short overview.
So most of the blue checkmarks are at least functioning; some need a bit more tests. But we have a couple of patterns we still need to work on, and it could also be that one or two patterns are removed completely, because they're not directly used in Plone, and then that's not our business. Yeah, things will get easier. We will have no more RequireJS yelling at you. Add-ons can ship any JavaScript module they need, and we will only load everything once, thanks to webpack Module Federation. Johannes will give a bit more detail in the lightning talk after this one, and I can also recommend his talk about TipTap — he gave some insights there as well. Yeah, we have a bit of work still to do, but we are trying to finish this up this year, so that we can soon merge the Plone 6 branch into master. You can give it a try: just try out the buildout coredev, using the config file you see there. And there's also some documentation — just ping us on Discord, we hang around there regularly. It would be nice to have some people joining us. Testing is really appreciated, as early as possible. And also, yeah, some hands on deck would be nice. It's just a few people who spend a lot of time on it — recently we got a bit more help, but it's never too much. So that's it from my side, and I give the mic to Johannes. Thank you very much, Maik. And to continue on this topic, we are switching right over to his partner in crime, which is Johannes Raggam, who will tell us about what sounds like the next science fiction series but actually is something completely else: in this talk, he will tell us all about webpack Module Federation. Take it away, Johannes. Muted, muted — unmute yourself. Okay, same problem. Can you hear me? It works. Good. So, screen sharing is hopefully also visible. I'll show you something about this webpack Module Federation, and how we want to integrate JavaScript modules in Plone in the future.
So, the goal of including JavaScript is that we want to have add-ons for Plone without the need of recompiling the JavaScript stack. We don't want to have code duplication — or if we have some, then it should be just very little. The bundle size should be small, and we want to have good performance, of course. This was hard to achieve with Plone 5 and RequireJS, with its version options and the stub_modules option in the bundle configuration — it was not very flexible. And there's a new concept, webpack Module Federation — or just Module Federation, because it's actually not bound to webpack itself — and we are now using that in Mockup and Patternslib. This allows us to have separate bundles which can have dependencies between each other. And we can, for example, define shared libraries: jQuery should be just one instance, used by all of the bundles which depend on jQuery, instead of every bundle shipping its own jQuery. And you can configure this Module Federation thing a lot. For example, you can define imports and exports — which modules you want to provide to other bundles — and if those external exports are not available, there is a fallback to the bundle's own dependency. And over here, it looks like this. Here we have two bundles which are used by the browser, and both use Patternslib as a dependency, but only one copy of it is downloaded — even if the bundles were built with different versions of Patternslib. You can configure which version you need as a minimum requirement, and then it's more or less automatically figured out for you. Both bundles have a complete set of all the JavaScript dependencies in them, but the browser only downloads what is actually needed. The configuration is not so hard — it's basically this: you import the ModuleFederationPlugin and configure it like here.
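As a rough sketch of what such a configuration can look like — this is illustrative, not the actual Mockup/Patternslib setup; the bundle name, entry paths and version constraint are assumptions:

```javascript
// webpack.config.js — hypothetical add-on bundle using Module Federation
const { ModuleFederationPlugin } = require("webpack").container;

module.exports = {
  entry: "./src/index.js",
  plugins: [
    new ModuleFederationPlugin({
      name: "addon_bundle", // illustrative bundle name
      shared: {
        // shared libraries: only one instance ends up in the browser
        jquery: { singleton: true },
        "@patternslib/patternslib": {
          singleton: true,
          requiredVersion: ">=9.0.0", // hypothetical minimum version
        },
      },
    }),
  ],
};

// src/index.js — the tiny entry point; the dynamic import gives webpack
// an async boundary so shared dependencies can be negotiated first:
import("./bootstrap.js");
```

With `singleton: true`, the runtime picks one copy of the shared package for all bundles on the page; a bundle falls back to its own bundled copy only when no compatible shared version is available.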
I have defined here Patternslib as a dependency which should be used as a singleton, because you just want to have it instantiated once — also jQuery. And you have to slightly adapt your entry point, the JavaScript file which is loaded when you open the web page, the one which you include in the script tag. And this looks like this — this index.js: you actually import another file where all your other dependencies are defined; you only have this single import in here. And that makes webpack split out all the code into separate JavaScript files, and the single entry point is then just a few kilobytes big, like 10 or 20 or so. The point is, it's not only this import which ends up in the entry — webpack adds more code, so that it can find all the other modules it actually depends on. So that's the approach. We still have to experiment with it and test it more deeply, but the first experiments looked quite promising, and, yeah, I'm looking forward to this — that we can allow add-ons to install JavaScript without recompiling. Thank you. Thank you very much, Johannes. It looks interesting, although some of the scariest two words in my book are "almost automated" — but we'll see where this goes. And next, it is my great honor and pleasure to welcome back a voice that we haven't heard for a while in the Plone community, also known as rearer of llamas: I give you the incomparable David Glick, with Snowfakery. Yeah, and I'll be back in just a minute — I need to restart Zoom to be able to share my screen, unfortunately. We're looking forward to that. Otherwise, just show us llamas, we're happy. And he's back. And he's muted. Hopefully this will work now. Yeah, good to be back here with the Plone community — it's been a bit of time since I've gotten to see you all, and it's probably been a few years since I've run buildout.
But I wanted to show a different open source tool in Python that a colleague of mine named Paul Prescod created. It's a tool called Snowfakery, for generating fake data. So if you saw the talks from Philip and Fred earlier today, you learned about collective.exportimport, which is this nice new tool for dumping data out of one site and importing it into another. Snowfakery is for a little bit different use case: either you're doing development or testing and you need to get realistic data — but not real production data — into your system, or maybe you're doing performance testing and you want to do the same thing at a really big scale, loading lots of data. We had a bunch of Python scripts that we were using to do this sort of thing, just looping through a bunch of imperative code, and we realized we'd like to have something declarative. So I'm going to try to do a live demo of this thing that uses YAML — I'm sure some people will yell at me about that. Basically, you write this recipe in YAML. This is saying we want to create three folders. Each folder has some fields: there's a title, which is going to come from the Python Faker library — we're going to generate a fake country name — and we're going to use this expression here to normalize that into an ID format. We're going to set the parent to the root of the Plone site. And then each of these folders is going to have some objects — "friends" — that get created in conjunction. So we're going to create somewhere between one and five pages for each folder, we'll set the parent appropriately to the ID of the folder that was created above, and then we'll generate a fake company name. So I'm just going to run this here on the command line. It supports multiple output formats; by default it will just print this data to standard out. So we can see we've got three folders and some documents for each one.
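As a rough sketch — not David's exact recipe; the field names are guesses based on what a Plone import would expect — the recipe he describes might look like:

```yaml
# Hypothetical Snowfakery recipe: three folders with fake country titles,
# each containing one to five pages with fake company names.
- object: folder
  count: 3
  fields:
    "@type": Folder
    title:
      fake: country
    id: ${{ title.lower().replace(" ", "-") }}
    parent: /
  friends:
    - object: page
      count:
        random_number:
          min: 1
          max: 5
      fields:
        "@type": Document
        parent: /${{ folder.id }}
        title:
          fake: company
```

Running it with something like `snowfakery recipe.yml` prints the generated rows; an `--output-format` option selects other formats such as the JSON shown next.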
I can also output that to JSON — this will generate a different set. And if I look at that here, we can see this is JSON, but it's not the right format for importing into Plone: it's got a table field and a type and things like that. So I'm going to use a modified output format here that I wrote, and now we've got something that looks like what we could import through the REST API. And now I'm going to hop over to Plone — I just fired up Plone 6 using the Docker image, which is really great; I got it going in a couple of minutes. It's got collective.exportimport installed, so I'm going to browse for my file, and then I'll just hit that and hit import. And then, if the demo gods are with me... another second here... there, we've got our folders and some pages. Oh, that's fine, we got Steven Mann represented here. Yeah, so that's Snowfakery — let's see if I can do anything else with my time. The modified output thing is basically just making sure that we have all the fields that are required to do the import. I can change the locale here to German, because Faker supports different locales, and we'll get our fake data in German. Let's see — Paraguay, let's say. Yeah, there — so Jordan is in German now, and we're generating some German company names. So yeah, that's Snowfakery. It's open source; you can use it standalone, and maybe there's a little bit of work needed to make it easier to use with Plone, but I just set this up this morning, so it works pretty well. Thank you, David. And might I add, if you just add some restrictions so that the countries are actually tax havens, and you generate fake company names, this is a business model for every startup. Just restrict it to the Virgin Islands and whatever, and register all of them. But then again, yeah, I know you're not that kind of person. Next up we have Philip Bauer, who is going to do one small VS Code trick. Knowing him, it's not going to be small and it's not going to be one trick, but we'll see. Philip, take it away.
Thank you, Paul. It's actually a really small trick. I hope you can hear me — I'm unmuted and my video is on, so I think everything is good; otherwise, just yell at me. During this year's Mastering Plone training, for the first time I didn't use two different editors for Volto development and Plone development — I used VS Code for both. And that is because of one small thing that I figured out. So this is an older project — Catherine will probably know what this is, if you're listening. It was a project from this spring, I think, and I hadn't used VS Code at that time, so it hasn't been included in my new setup yet. When I have an editor and I want to integrate the whole Python tooling, it's always pretty daunting to set up the whole Python stuff so that it actually works fine. So I open it for basically the very first time — it's just our default buildout — and get some code. I hope I haven't moved anything just because I moved my mouse. So yeah, here's some code; let's look at it. There is a module, and here are all these imports. And after a while, it realizes that everything is squiggly-underlined in the squiggly way, because I already have the excellent Python extension and Pylance, the language server which is the default in current VS Code. But it doesn't know where "from plone.dexterity.content import Item" comes from, so I'd have to search for that, and I'd use my omelette to do that, or the packages folder. So that is super annoying, because I hate these lines — I really want to get rid of them. There is one very, very simple fix for that, because — probably like all of you — I also have something in my bin folder, because my buildout is configured like that, and it's called zopepy. And that's it.
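For reference, the interpreter VS Code should use can also be pinned declaratively in the workspace settings instead of through the UI — a sketch, with an illustrative project layout (VS Code settings files allow comments):

```jsonc
// .vscode/settings.json
{
  // point the Python extension / Pylance at buildout's zopepy,
  // which has all the project's eggs on its path
  "python.defaultInterpreterPath": "${workspaceFolder}/bin/zopepy"
}
```

Checking this file into the repository means everyone opening the project gets the working interpreter without clicking through the interpreter picker.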
So, here is your Python executable. The default is picked for a reason that you can look up in a lot of documentation, and the easiest way to solve your problem forever is to just copy the path to bin/zopepy, click here, choose "Enter interpreter path", and paste the right one. And after a second or two or three or four — maybe I close the file again and open it again; the screen sharing always takes a toll on the processor, and as I said, my new laptop hasn't arrived yet — yeah, it's not squiggly anymore. All the imports are found, I can go to definition, and there is my SimpleTerm. So that's it. No more special buildout recipes, because zopepy is actually something that you probably have in every project anyway, and your VS Code is configured to work perfectly with the Python project in Plone, because all the packages are in there. So that is a really cheap trick which gets you set up for Python projects in VS Code — it's actually really small, nothing big. Thank you very much. It was an absolute pleasure to have had the last slot of lightning talks. No, you don't. I don't have that shit. When I say something... Do I have a second more? 48 seconds? I'll use them to pour some scotch and raise a toast to all of you in the lovely Plone community. Thank you for giving me the life that I have. Have a great evening. Bye. And the last talk, the last lightning talk — after which I will have a final, very short announcement — but the final talk live from here, where "here" is defined as the Sorrento Plone Conference fan zone, is by Érico Andrei, who will be talking about new Docker images. Shiny things. Take it away. Okay, let's go. I'm controlling everything now here, so I hope it works. Yeah, we love containers. Okay... Okay, I love... you need to stop sharing. Yay, we love containers. Okay, we always loved containers. Most companies that we've been talking to are using containers some way or another.
Of course, some of us had a love and hate relationship, especially with Docker, and we are getting over it — maybe Docker will get over it as well, but so far so good. We have a new generation of Plone Docker images. They are not officially supported yet; we hope to fix that by the end of this conference and have everything documented. We have a plone-backend image already supporting 5.2.6 and 6.0.0a1. We have a plone-frontend image using, confusingly, Volto 14 and Node 14. Also, we have a plone-zeo image for those of you that still run ZEO, because the shiny new thing — and we like new things, though it's not so shiny and not so new — is RelStorage. And we have a plone-haproxy one that already comes with some nice things; thanks to EEA for donating that to us. I need to install my own products — how do I do it? I start with the default image, run pip install, add your add-on there, and then use the pip legacy resolver flag, because even though Maurits is super productive, the people maintaining pip did not press the merge button and release. I hope this is fixed soon — or I hope they already did and I'm just telling old news here. But as soon as you do this, you start the new image and your plugin is there. So it's super simple. One important thing: all these images are based on pip, so there's one trick — you need to declare dependencies. You include in your configuration a zcml include with the name of the other packages you use in your installation. I was looking at some of the most popular add-ons; they already have that, so it's not a big deal — just a matter of keeping doing that. We support environment variables. So, I want to set up a stack with ZEO: basically pass the environment variable ZEO_ADDRESS, and tada. I want to support RelStorage with Postgres — we are not yet supporting MySQL or Oracle in the main image — but you pass RELSTORAGE_DSN with the value, and tada.
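Extending the backend image with an add-on, as described here, might look roughly like this — the image tag, add-on name, and pip path are illustrative, not necessarily the exact layout of the official image:

```dockerfile
# Hypothetical sketch of adding a product to the new backend image
FROM plone/plone-backend:6.0.0a1
# the pip shipped inside the image, with the legacy-resolver workaround
RUN ./bin/pip install collective.easyform --use-deprecated=legacy-resolver
```

Building this once gives you an image with the add-on baked in, ready to start like the stock image.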
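And the environment-variable-driven setup could be sketched as a compose file — service names, tags, and ports here are assumptions for illustration:

```yaml
# docker-compose.yml sketch: backend wired to ZEO via ZEO_ADDRESS
version: "3"
services:
  backend:
    image: plone/plone-backend:6.0.0a1
    environment:
      ZEO_ADDRESS: zeo:8100
      # alternatively, for RelStorage instead of ZEO (illustrative DSN):
      # RELSTORAGE_DSN: "dbname='plone' user='plone' host='db'"
    ports:
      - "8080:8080"
  zeo:
    image: plone/plone-zeo:5.2.6 # illustrative tag
```

The point is that switching storage backends is a matter of passing a different environment variable, not rebuilding the image.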
More examples with Docker Compose are available in the README at github.com/plone/plone-frontend; we will keep adding more there. That's interesting, because this is the complete setup for you to have Plone 6 with Volto, shiny, working out of the box. So I want to close by thanking the lovely installers team — I would like to ask for applause for all of them. Yes. And Steve, because Steve is our BDFL. Okay, it's hard — they gave me a booster during my last talk, so I'm kind of slow now — and I'm part of the team for now. That was it. Thank you all.
- Holzer: My first Plone 6 site, - McFadden: Web developer confessions, - Casali: Plone sponsorships, - Jay: Why Drupal won down under, - Derstappen: Current state of Mockup, - Raggam: Javascript integration, - Glick: Snowfakery, - Bauer: One small VS Code trick, - Andrei: New Docker images
10.5446/56652 (DOI)
Hello, welcome back to track one of the Plone conference. Our third talk today is from Tiberiu Ichim, giving us his fourth talk of the week, after also doing a training over the weekend — so thank you, Tiberiu, for everything that you've been showing us this week. Tiberiu has been using Zope since 2003 and switched to Plone after trying to build his own CMS on top of Zope, and is currently doing a lot of work with Volto. So right now Tiberiu is going to talk about search. Go ahead. Thank you, Chrissy. Hello, everybody, and let me start my presentation. Do I get the confirmation that this presentation works? Yes, it is there. Okay, cool. Okay, so, yeah, we're going to talk today about EEA Search, which is a bunch of services, packages, libraries and so on that together provide advanced search services. And as part of those packages, we also have a Volto integration, so we are on topic, let's say. It started development this year in April, and it was developed by me and by Zoltan Szabo — these are the main developers. And to talk a little bit about the history of this search library: it's coming from the EEA, the European Environment Agency. When it comes to an organization such as the EEA, with a lot of specialized content, a good search integration becomes a high priority. All right. So we have seen development over the years of several search products at the EEA, and many of them are still in use. We all know EEA Faceted Navigation. It is a complex and somewhat complete, I would say, search solution for Plone websites. It has the advantage that it's easy to set up: you just install an add-on and put it in Plone — more or less easy, right? I mean, not as difficult as an external service. But along with these advantages, it has limited full-text search capabilities, as it uses the Zope catalog.
And of course, you can only search Plone content, and with the EEA projects we also have to search across websites, across many other data sources, so the Zope catalog becomes a limitation. The EEA Search Server is another product that the EEA has. It provides a search integration with Elasticsearch, and the frontend is implemented with jQuery. It is the current, let's say, default implementation for search. For example, the EEA Global Search, which is the main EEA search service, is still using it, and we are actively working to replace this search service — because it has a few drawbacks, which I'll explain. So yeah, we have this EEA Global Search. It indexes many websites. It is based on a process of pings initiated from the website: in Plone, when you add new content, change content, or delete it, an async worker pings a central repository. That central repository will then go to our Plone website and, using RDF Marshaller, grab the data — the content — from Plone, and index it in a Virtuoso database. From that point, we use a SPARQL endpoint to download the data and put it in Elasticsearch. So all of this, yeah, is a little bit convoluted. And the biggest downside is that, being based on jQuery, it's hard to develop. Because the framework is too big, only one guy knows how to deal with it, because he's the main developer for it. So yeah, we want something else right now. But this search service has already been implemented on some websites — for example Climate-ADAPT and the Forest Information System for Europe — and that one is a Volto website, actually, which just highlights one of the drawbacks of the old jQuery implementation: we have a Volto website, which is running on React.
And basically we're integrating it somehow — but it's a more or less fake integration: we've just added the look and feel of the forest website to the search server deployment, in this case, to make it appear integrated. But there are a thousand drawbacks with this. So the ideal scenario is one where we have this search fully integrated in the website — fully running as part of the JavaScript bundles, as part of the codebase of that website — so that it doesn't require external templates, it doesn't require special snippets of HTML, but just integrates the header and footer and so on of the website. There is another solution, let's say, from the EEA, and this one is deprecated. It's something that I developed last year as part of FISE, the Forest Information System for Europe, and that was actually the first integration of Volto with Elasticsearch that I worked on. I did it that way because I knew that integrating the jQuery-based solution would be hard, right? So I worked with the volto-searchkit — sorry, with Searchkit, which is an external third-party library, but it is based on React and integrates with Elasticsearch. And I want to mention not just that one: there's another product from the Volto community — I think it's called volto-searchkit as well, the same — but it's made by Katia. I don't actually know the status of that one. But in case you're just interested in very quickly creating a Volto integration with Elasticsearch, you can take it as an example. And during this development process, we quickly understood that there is one little trick that we can do with Volto that can help us with the development process and also with deployment — and that is to create middlewares that will proxy the Elasticsearch backend to the frontend, right? That is because Volto is running on top of Express.js, which is a Node.js HTTP server.
And that Node.js HTTP server supports extensions — middlewares. As part of this project, we've actually gone into Volto and improved that part: split it up and made it possible to create middlewares from Volto add-ons and so on. Yeah. So right now we've arrived at the EEA Search Kit, which is supposed to be the one search appliance to rule them all, right? The idea is to create a library on top of React that can be used to build Elasticsearch-powered search engines. And we will also add — because it's crazy and it's the time — semantic search capabilities and NLP processing. So "searchlib" is the internal name. It represents a bunch of actions, services, workflows, and so on, and we will take a look at them. As part of developing this searchlib, we will also modernize the UI. Right now we have many implementations, and we started, for example, by re-implementing the UI that is provided by EEA Global Search. But now we are in the process of defining and refining how that search UI should look. Before, when it was implemented in jQuery, it was almost impossible to take on the task of refining that UI, right? It's an outdated technology — there was no point in even trying to improve things there. And it's a big architecture: we have a lot of components, more or less, but they are all open source. We have Docker images, and we have documentation, in principle. It is possible for you to take this project and run it on your own, and we already have other companies that will — or are in the process of — taking these packages and services and developing for non-EEA websites. They've reported that things are good and, yeah, they've had success in bootstrapping it and having it running on their websites. Okay, so there is the searchlib main library, which is the React library; it provides the Elasticsearch request integration, the UI, and so on.
We have Searchlib Global Search, for example, which is a separate package that provides just configuration, a Searchlib middleware, and volto-searchlib as a separate Volto add-on. And we already have Volto Global Search as a deployment. We have the Global Search standalone, which can be used to test outside of Volto. And the catalogue of measures, which is an actual deployment of this Searchlib library, running on classic Plone 5. So there's a huge advantage in being able to develop in React and to have this modern workflow and webpack pipeline, for example: we can actually integrate a lot faster and more easily with any deployment target. And in the backend, we have Elasticsearch as the database and the search engine. We have an NLP server, which is a standalone service, and I'm going to explain exactly what it does. And we use Apache Airflow as a task runner and to run a harvester. So, we chose Apache Airflow as the task runner. It has pretty wide community backing, right? It is backed by the Apache Foundation. There are a lot of packages that integrate with it. Now, using Airflow as a harvester is not the most straightforward thing to do, because it has this configuration of DAGs that are not really dynamic. But we managed to achieve that, and if anyone tries to do the same thing, there is knowledge in this community and we can help with that. So, I was mentioning the configuration scripts of Airflow. They are, let's say, the equivalent of workflows. Airflow calls them DAGs, directed acyclic graphs. And they cannot be fully dynamic. So we worked around that: basically, we have a DAG that can trigger another DAG. And that is how we built the harvester, because we will have, for example, one DAG which will generate a list of links and then trigger another DAG for each one of those links. Airflow is a wonderful tool.
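The dispatcher pattern described above (one DAG generates links, then triggers another DAG per link) might be sketched like this with Airflow 2.x. The DAG names `harvest_dispatcher` and `harvest_one_link` are invented for illustration, not the actual EEA DAG names; since this is a DAG definition (configuration-as-code), it is meant to be parsed by an Airflow scheduler rather than run standalone.

```python
# Hypothetical sketch of the "a DAG triggers a DAG per link" harvester
# pattern, using Airflow 2.x APIs. All names here are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.decorators import task
from airflow.api.common.trigger_dag import trigger_dag

with DAG(
    dag_id="harvest_dispatcher",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:

    @task
    def generate_links() -> list:
        # In the real harvester this would crawl a sitemap or query an API.
        return ["https://example.org/a", "https://example.org/b"]

    @task
    def dispatch(links: list):
        # Work around static DAG definitions: trigger one run of a
        # second DAG for each discovered link.
        for i, url in enumerate(links):
            trigger_dag(
                dag_id="harvest_one_link",
                run_id=f"harvest_{i}_{datetime.utcnow().isoformat()}",
                conf={"url": url},
            )

    dispatch(generate_links())
```

Airflow pools, mentioned below, can then cap how many of those per-link runs execute concurrently for each website.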
And I mean, we didn't choose it just because it can run tasks, but also because you can manually trigger tasks and DAGs, and you can monitor very well what happens. There are pools that we can assign to each website, for example, and to each type of task. So it provides a lot of control when we scale this harvester. We are seeing your presenter notes as a second window on top of your presentation. Okay, sorry. I don't know if it's embarrassing or not, but I will try to fix that. Okay, that's better. Okay, so what else do we have? We have this DAG that will stash documents in Elasticsearch and will then trigger a process to process those documents with the NLP server. And I was talking about the logging of tasks and jobs in Airflow; they look like that. This is one of the main advantages of using Airflow for such a task. Okay, so now we have the NLP server, which I think is the third in a line of NLP services that I've developed. Right now it's running on Python and FastAPI. It is on GitHub as the NLP server. We use another library called Haystack, and that one provides the most important functionality, which is the question answering models, but also some glue code, for example the pipeline engines and so on. And at the base of it all, underneath the Haystack interface, we use Transformers as the NLP library. Okay, so as NLP server capabilities we have: Elasticsearch proxy, search results re-ranking, question answering, query classification, summarization, similarity, question generation, named entity recognition and extraction, zero-shot classification and text embedding. And if we have time in the presentation, I will also show it in action, to see what exactly that looks like and what it can do. So this is an example of a pipeline and how it looks in the NLP server.
Basically, we have these pipelines with nodes, each node in the pipeline has configuration, and we use YAML files to declare the pipelines. Now, the QA process is based on very recent work, models trained with recent deep learning frameworks; basically it is recent machine learning work. I will not try to explain it because it's quite complex. But the idea is that there are models, there is open source code that can integrate those models, and it is possible to use them quite easily, I would say, to create a question answering search engine. There is a process where, basically, you have the document store, and that can be something like a dedicated vector search engine or just a classic search engine like Elasticsearch. So when you type a question, some documents will be retrieved from the document store, let's say 20 of them, and they will be passed to the answer extraction model, which will take a look at those documents and extract the answers from them. We don't pass a lot more documents to the reader because it is quite costly, the tokenization and everything else; it all needs to run on the GPU, actually. So now the current status: we started working in April-May 2021, we have a launch due next year, but as I mentioned we already have it deployed and it's used on non-EEA projects; for example, CodeSyntax is working on one. And we have the Volto integration; I have this screenshot of what the Volto settings sidebar looks like. And of course we have a lot of UI improvements and NLP work ahead of us. And I will try to give you a demonstration of this search engine, because I have it running on my machine. So this is running in Volto. Okay, we're not seeing it yet. Okay, hold on a second, I will try again. This Zoom screen sharing always has problems. So, search, share. Okay, there we go. Okay, cool.
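The retriever-reader flow described above can be sketched with a toy example. A real deployment would use a BM25 or vector retriever and a transformer QA model as the reader (for instance via Haystack); the term-overlap scoring and the "reader" below are deliberately naive stand-ins that only show the data flow: retrieve a small top-k, then extract an answer from just those documents.

```python
# Toy retriever-reader pipeline: retrieve top-k documents by term overlap,
# then let a naive "reader" pick the sentence most likely to hold the
# answer. Only the data flow matches the real system.
import re

def tokenize(text: str) -> set:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, docs: list, k: int = 20) -> list:
    """Score every document by term overlap and keep only the top k."""
    q = tokenize(question)
    scored = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

def read(question: str, docs: list) -> str:
    """Naive 'answer extraction': the best-overlapping sentence wins."""
    q = tokenize(question)
    sentences = [s for d in docs for s in re.split(r"(?<=[.!?])\s+", d)]
    return max(sentences, key=lambda s: len(q & tokenize(s)))

docs = [
    "GHG stands for greenhouse gas. Emissions fell in 2020.",
    "The agency publishes reports. Air quality improved slightly.",
]
answer = read("what is GHG", retrieve("what is GHG", docs, k=2))
```

Capping the retrieved set (the "let's say 20" above) is what keeps the expensive GPU-bound reader affordable.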
So this is the main page of the search engine, and this is actually one of the challenges that we had: we are building a search engine, a semantic search engine, but we still need strong metadata support. We need those filters that you see on search engines; basically you have to be able to provide metadata filters, let's say the content type that I'm showing you here. All of this needs to be indexed in Elasticsearch and associated with the document's metadata. That also creates a problem, because the NLP models prefer text that is really short, for example 500 characters or so. So we have to run basically two indexes: one with the raw information, and another, prepared one, where each document is split into multiple potential documents and so on. It complicates the infrastructure, but it's fine. So we have facets left and right, and the most important thing we can do, and it is quite nice, is that we can ask questions. For example, "what is GHG?": there will be a little bit of waiting, but then we get the direct answer. Then we can ask other questions, like "who is the director of the EEA?", but this one actually highlights a potential problem you can have with this type of search engine, which is that this information is very much dependent on freshness. You cannot rely just on the NLP models to tell you the answer, because given the wrong documents, the answer extraction reader will basically generate a "right" answer, except maybe it was right 10 years ago and it's not current anymore. That is because some questions are also time-dependent, and they depend on information that we cannot provide.
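The second, "prepared" index mentioned above amounts to chunking each document into model-friendly passages before indexing. Here is a minimal sketch; the 500-character budget matches the figure given in the talk, but the field names (`parent`, `text`, `_id`) are assumptions, not the production Elasticsearch mapping.

```python
# Split a long document into ~500-character chunks on sentence boundaries,
# carrying the parent metadata along so facets keep working. Field names
# and chunk size are illustrative, not the real indexer's schema.
import re

def split_passages(doc_id: str, text: str, meta: dict, limit: int = 500) -> list:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    passages, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > limit:
            passages.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        passages.append(current)
    # Each chunk becomes its own document in the prepared index, pointing
    # back to the raw-index parent so metadata filters still apply.
    return [
        {"_id": f"{doc_id}:{i}", "parent": doc_id, "text": p, **meta}
        for i, p in enumerate(passages)
    ]

chunks = split_passages("report-1", "First sentence. " * 60,
                        {"content_type": "report"})
```

At query time, hits from the chunked index can be resolved back to their parent document for display.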
Yeah, so of course there are all the niceties in these search engines, like title, description, I mean sorting, and various views and so on. And actually, working on this search engine inspired me to work on the current search block in Volto, so yeah, we have this one to thank for that. What else do we have? We have the NLP server. We're missing the screen sharing again. Sorry? So we're missing the screen sharing again. Okay, okay, let me try again. Actually, I'm going to just share my whole screen, because it will be easier. Okay, come on, go. Okay, so let's start from the beginning. This is the main interface for the search server. It lists all the deployed, let's say, pipelines currently in the system, and it is possible to start this search server with just one or two pipelines, so that you can create customized deployments, or so that models you don't need are not loaded into memory, and especially not GPU memory. So we have things like search, QA, similarity and so on, which I've already enumerated. But if we go to the API, we have the OpenAPI interface where we can actually interact with the NLP server, and we can do stuff like, and I will bump up my zoom, this: the server already comes with some text as the default, so that it can be tried quickly. So if we provide this document and ask it to execute, then we get it summarized. And I find it quite interesting; I mean, it's amazing, the state of the art that we have right now with the NLP models and what we can do, and these are not even fine-tuned models, right? So for example, if I search for "joint report", you will see that "have published the joint report on the environment" appears nowhere in the input as such; the joint report is here, this joint report. So the summarizer model actually abstracted and generated new text from the provided input. I like this one, for example, the zero-shot classifier: given some text and some
labels, it can tell you which one it is. So we have candidate labels, and this is a model that didn't see any specific training; it's not a categorization model where you have some text and then train it for those labels. It can just tell you directly that, for example, the text I provided is about water, marine environment and transport. And that could be used, for example, to auto-suggest tags based on already existing tags in the system, and so on; we're probably going to develop an integration for that one. Yeah, the query classifier, that's an important one. We already use it in the search, because it can tell us if we're dealing with some keywords, a question or a statement, so that we can decide if we should run the QA or not. We have the similarity, which can tell you, for example, how similar one sentence is to another. I like this one, the question generation, because it opens a lot of possibilities. Given some text like this, and this one will take a little bit of time because it's not optimized for GPU (I have to do that at some point, it's on the to-do list), it will generate new questions whose answers lie in the text. So for example, it generated this type of question: "how many accidental oil spills?", right? And it also knows which is the answer, and so on. And it even generates questions with multiple choices. This is recent work, and it's all open source, all available. So if you're trying to build something similar, you can use the NLP service for this, or you can build your own. What else is interesting here? Yeah, named entity extraction, the summarizer, which I've mentioned, and I think that's it, that's what I have activated right now. Well, this is it. I'm waiting for you in the Jitsi; if you have questions or ideas and so on, you can just ask me anything. Thank you for watching. And thank you again, Tiberiu, for presenting. I'm sure it'll be nice to be done with presenting for the week now, so
everyone go ahead and join in on Jitsi. I've put the link in Slack, and we'll see you later. Thank you.
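The query classifier demonstrated in the talk, which decides whether input is keywords, a question or a statement before the expensive QA pipeline is invoked, can be approximated with a crude heuristic. The real service uses a trained model; the rules below are only illustrative of why the routing decision matters.

```python
# Crude stand-in for the query classifier: only inputs classified as
# questions get routed to the GPU-bound QA pipeline; everything else
# goes through plain keyword/semantic search.
QUESTION_WORDS = {"what", "who", "when", "where", "why", "how",
                  "which", "is", "are", "does", "do", "can"}

def classify_query(text: str) -> str:
    words = text.lower().rstrip("?").split()
    if text.rstrip().endswith("?") or (words and words[0] in QUESTION_WORDS):
        return "question"      # run the QA reader
    if len(words) <= 3:
        return "keywords"      # plain keyword search is enough
    return "statement"         # search without answer extraction
```

For example, `classify_query("what is GHG")` routes to QA, while `classify_query("air quality report")` stays on keyword search.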
Searchlib is an extensible library and a set of associated applications that integrate Elasticsearch and NLP technologies to build an improved faceted search interface for data catalogues.
10.5446/56654 (DOI)
Hi, welcome back to Track 2. I'm with Ramon Navarro Bosch, who is the CTO of Iskra. He's going to talk to us about Guillotina and the data layer, so FlapsDB, which is also a storage layer, and some more information about Flaps. Ramon is a longtime member of the Plone Foundation and an all-around great guy. He's also the co-author of Guillotina. So Ramon, whenever you're ready, take it away. Thank you so much, Andy. Well, first, welcome everybody. I'm here mostly as the CTO of Flaps. That's my new company and a super amazing project. And well, I'm going to start by explaining a mystery, like a dream that I've been kind of pursuing for the last years and that finally, I think, I'm getting to now. In order to explain this, I need to go from where we started Guillotina to where I am now. The end of the story is going to be about building an AI-powered search engine for unstructured data. But first, let's start with Guillotina. Guillotina was born in 2017, with Nathan, when we were trying to build a more scalable framework than Plone was back then. We were dealing with the problem of needing to store thousands of millions of objects and needing to search in them. So we needed to build something more reliable, more scalable, that is also backed by an SQL database, so it's easier for operations to scale up. So we developed Guillotina based on the designs of Plone, and then we decided to push it into production at the maximum speed possible. For this, we needed to develop a lot of add-ons, and Neil just explained some of them, like the Elasticsearch one, which provides the option to serialize a lot of search information into Elasticsearch and gives really fast results with keyword search and BM25 search.
Then the file storage, pub/sub, cache distribution; I think the cache implementation in Guillotina is super cool and provides a super nice scalability story. We even have a simple user database, so Plone people could map the user API and registration. That was 2019. Okay, in 2020 we decided to implement the ASGI protocol; LDAP, because we know that Plone people love their LDAP, so we implemented that plugin; Stripe, which Neil also explained; and a lot of other plugins to provide easy-to-use integrations on top of Guillotina. Nowadays, in 2021, we have the version 6.4.0 release candidate in the pipeline already, and I'm so happy because on the frontend we are already working with Guillotina React to build easy interfaces for Guillotina, and we are supporting Python 3.10 with the latest asyncio API. So it's been a super amazing evolution for Guillotina, and the result is what we have at this conference: two trainings and two talks from new devs doing Guillotina. I'm super happy about five companies using it in production, with lots of usage and really good feedback that I'm receiving from them. We have a long path to go with the frontend; Eric is going to talk about it, Brigitte is going to talk about it, and I'm super happy to see all these frontend stories on top of Guillotina. We need the admin interface, we need a CMS, maybe with Guillotina React, maybe with Volto. We don't know. We want to grow the Guillotina framework team that we have right now. Right now we are only two people maintaining it and doing meetings. If anybody wants to get involved and help contribute to the Guillotina core team, we are more than happy to welcome them to the meetings that we do every three or four weeks. And everybody is asking me: well, is Guillotina going to be the Plone, I don't know, 9 backend? We don't know. And it's not our goal to be the Plone backend.
We are super happy with what we have right now. It's useful for building projects, for developing fast APIs that are deployed in no time, and we are really happy with it. Will it become the backend for Plone? Will Volto want to do an integration with Guillotina? That would be super awesome. It could be a good story, but we will see in the future. So, I've been talking about Guillotina, but during the last conferences I've also been talking a lot about other things. I explained last year at the conference that in 2018 there was a revolution in NLP: BERT, a language model, kind of revolutionized the way we are able to convert language into vectors and do semantic search on information. Then in 2019 we did a Guillotina sprint in Toulouse. It was an amazing sprint, hosted by Eric. And I remember that we were there and we said: we need a better backend storage layer that supports full-text search and that is not Elasticsearch, because with Elasticsearch the indexing is delegated, which is super costly and super difficult to operate. And then in 2020 I was thinking: we're missing something. We need to build a cloud service designed for people who manage information, textual information or file information, that is able to provide state-of-the-art NLP search, democratizing the technology that nowadays only Google and Facebook and these kinds of companies have the power to develop and push into production. So that's why we decided to create Flaps. Flaps is our brand-new startup, super amazing. And now I'm going to explain a bit about Flaps, and then I'm going to justify why I'm explaining what Flaps is here. Flaps was created to be an easy-to-use API to build AI-powered search engines for unstructured data. What does that mean?
It means that, for example, for Plone it can be a storage layer where you can store, if you want, your main data or your indexing information, and be able to provide, with low code, with a web component that you plug into your system or an SDK, a way to guide your users to find any information, instead of the standard keyword search experience that we have nowadays, whether it's through the ZCatalog or through Elasticsearch. So what do we do at Flaps? Flaps understands whatever information you are pushing into the system and converts the extracted information into searchable information. We ingest all kinds of data: conversations, like Slack conversations or Google Chat conversations or whatever; files; URLs with information; unstructured or semi-structured information like JSON and XML. We convert all of it into text, paragraphs, entities, summaries, thumbnails, previews, relations and semantic vectors. There's a lot of processing power in this engine. And then, finally, we provide tools for ranking and tuning, allowing you to define specific rules, specific meanings, specific concepts that you want to push in your search, because the goal is that everybody finds information, that all the clients of our product are able to build on this information. So what are the steps? First, extracting data as a service, so we can extract all possible information from a file without the need to install any kind of software on your own systems. Just as a service: you push data and get back everything that's extracted. Then, providing a scalable way to index and store this information and improve the search experience. We don't want people to do that through navigation. I really think navigation brings frustration, because you are going through the tree.
Maybe it's this path, maybe it's that path; maybe you have a faceted navigation and you need to decide if it's this topic or that other topic. Finding information that way is a nightmare. We really think navigation is not the way: you should use search. We are used to Google. Google provides a search widget which is super powerful, and we find information. So why can't we have this power on our intranets or on our own websites, democratized so everybody can use it? For that, we need semantic search. That means finding information through vectors, and finding information multilingually. So maybe your content is in English, but maybe you don't speak English well enough to find what you're looking for. So this means being able to search for information that is in another language. And here is where I like it, because I love open source, I love the community. And I think that the democratization of artificial intelligence, making it easy to use, providing tools so people can use it and provide value to their clients, is the key to what we are building right now. And finally, open source. I've been doing open source, I think, all my life. And I really believe it's the way to provide value and to give clarity about what we are doing. It also provides governance, which means you can have your own installation and own your own data, on your own devices or your own cloud, without anybody controlling it or making money with your data. So finally, as a summary, and I promise I'm not going to be more promotional: Flaps is an end-to-end, low-code API. We want everybody to focus on what brings value to the tool they are building. They don't need to worry about whether it's an inverted index, whether we are storing the keyword or doing BM25. It needs to be easy and end-to-end.
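Cross-lingual semantic search works because sentences with the same meaning land near each other in the embedding space regardless of language. With tiny hand-made vectors (a real system would use a multilingual embedding model; these 3-dimensional vectors are pure illustration), the ranking mechanics look like this:

```python
# Cosine similarity over embeddings: the Spanish query matches the English
# document because their vectors are close. The vectors here are
# hand-crafted toys standing in for real multilingual model embeddings.
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

# Tiny "index" mapping document text to its embedding.
index = {
    "the best football player": [0.9, 0.1, 0.0],
    "the French Revolution": [0.0, 0.2, 0.9],
}

# Pretend embedding of the Spanish query "el mejor futbolista": the model
# places it near the English sports document, so it ranks first.
query_vector = [0.88, 0.15, 0.05]
best = max(index, key=lambda doc: cosine(query_vector, index[doc]))
```

The index never sees the query language; ranking depends only on vector distance, which is what makes the Messi and French Revolution demos below work.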
It needs to be something that can be integrated in minutes, because we want to provide this value to you so you can push your own value on top of it. And scalable: we have use cases where people have millions of documents, and it needs to answer at the same speed as for people who have 100 or 1,000 documents. And data governance and open source are core values of the system, so we can provide storage of these documents. So, to show you what I've been talking about, we are going to do a really short demo with a UI we developed just for the purpose of demoing what we are doing. Remember, we are an API, and our goal is not to build the final product; we provide tools so that everybody who needs to build something that has to find information can use them. For example, here you will see a PDF, a 68-page PDF with a lot of text, that you push to the API. The result of pushing it to the API is an extraction of all the paragraphs in the document, which we make sure are as close as possible to real paragraphs, so you can search for any concept within a paragraph. For example, we can search "player" and it gives us the paragraphs that carry the meaning of "player". That's cool. But we can also search "Google", for example, and then we find paragraphs that have both words in the sentence. That's quite easy. We can click on one, and as we know the position of this paragraph, we can see the actual paragraph in the system. All of this information is in the API and you can see it. We are also able to extract all the possible locations that appear in the text, so you can search for them, and all the possible people, through the same API. Okay. But this is not magic; this is just extracting all possible information. Once it's extracted, then you can search for something. For example, imagine that we search for "the best football player in the world". And sorry for the fans of football.
We know that yesterday Barcelona also had kind of a problem there. But we can see that, for example, our engine is able to understand that we are searching for Messi. And in English, here you can see that it's providing a semantic result saying that it's Messi we are looking for. But it's not only in English: we can see another article, written in Spanish, that gives us the same kind of information, because vectors don't care which language we are talking about, so we are able to search across languages in the same vector space. And that's not the end of the story. We can search for any term, for example "Revolución Francesa" in Spanish, and find a piece of a paragraph of a video that talks about the French Revolution in French. You can click on it, see the paragraph, and go exactly to the second in the video that talks about the French Revolution. I'm not going to play more with the product itself; this was just to show what our API is able to do and what we provide on top of it, so you can build this experience into your own products, your own CMS, your own intranet or whatever you want. So, we've been working hard on this system, and we developed an architecture around the data with which you can build an easy MVP that is super useful for showing the power of what you're going to build. This architecture is super simple. It just has the database in the middle, storing all the information, all the extracted information, everything that you've seen, and providing an index with a lot of semantic machine learning inside, a training system to train the models, and a processing engine that is able to understand everything.
This system, what you've seen just now, is built on top of Guillotina, which gave us the opportunity to build fast, deploy fast and deliver value super fast. And this is one of the key values of Guillotina. What happened is that we are growing, like all companies, and now we are moving to a different architecture where we have many more options to scale the team, so everybody has more responsibilities. Let me just check. So I'm going to explain a bit the architecture of Flaps, the final version. First, on one side, we have a processing API: an API where you can send files, you can send any kind of information, and you can push all possible connectors through an application called Flaps Desktop, where you can connect to Dropbox or any integration that you would like. It's an open source application that we will be open-sourcing by the beginning of next year, with which you will be able to implement your own integration, I don't know, with Plone, with anything, that just connects to the source, without moving it into your system, and pushes it into the Flaps API to process any kind of information. Besides this application, as I said, we are going to open-source the internal kernel. We will also offer an SDK and a REST API with full documentation on how to use it, on the API side. On the other side is where we search for information. We are also going to open-source the SDK UI. The SDK UI is what you've seen in the demo. It's a web component written in Angular. Sorry, Timo, I know it's in Angular, but we developed it in Angular because I feel more comfortable in Angular. Sorry. We developed this web component so it's easy to integrate on any website and search for any information that is stored in your bucket of information. So on one side we have the processing, the ingesting of information; on the other side we have the search. We gather all this information, and what's in the middle? I'm going to open this box.
The first thing is the extraction and understanding. That means we collect any piece of information from the file that may be useful, whether from metadata, from a text line, or an image that we convert into a vector; any piece of information that can help find that document in the overall sea of all your information is extracted and prepared to be stored. Here is where I'm super proud. We need to store this information in a way that we can search later; that's the key of this architecture. This is the database that we designed to be an NLP database, storing all this information, text, vectors, relations, paragraphs, previews, everything needed to provide this search experience, in a database that is transactional and scalable. We are working hard to open-source this database. It's not easy, but we plan to do it by the end of November or the beginning of December. I was hoping to have it by today, but it's not been possible; we are working really hard to make it happen. We want to open-source it because we really think everybody should have the option to store their information on their own systems and integrate it into their own open source applications. It's super important. Finally, the last piece is the machine learning piece. Once you have everything stored, we need to get all this data again and create our models. Some of them are in TensorFlow.js, so they are able to run on the frontend without any need for inference on a server, like, for example, query expansion and intent detection; and a lot of other models, like labeling and ranking, need to be trained on our servers, and we provide them through the API. In short: what's FlapsDB?
FlapsDB is the database that is going to be the storage layer for unstructured data: open source, designed to be fast and scalable, and providing BM25, paragraph and vector information, so you can build value in your own applications on top of it. The design is quite simple. It ingests information through a gRPC API, where you can send resources or query information, and it also provides a way to connect your streams of data, with just a Kafka stream, so we can provide transactionality. From the logical point of view, it's super simple. The base organizes knowledge boxes, which are blocks of stored information, what in Plone could maybe be a Plone site, or in Guillotina a container, and you can store as many resources as you want inside a knowledge box. Each of these resources has different kinds of fields. Right now we support keyword-based fields, text fields, link fields, file fields, layouts and conversation fields. We aim to add more kinds of fields to the resource schema, and as it is a dynamic schema, you can define more fields or fewer fields on any resource without needing to pre-define the schema for each one. We are focusing on the fields that provide the most value for finding information in the system. For example, right now we are also working hard on adding a date field, so we can provide range queries. In general, FlapsDB is designed as a bunch of subsystems that interconnect with each other. On the lower level, we have blob storage; right now we support GCS and S3, and we aim to support Azure and others. Then we have our nodes. The nodes are the indexing components that allow us to store vectors, text, paragraphs and relations in scalable ways, through shards and the distribution of data across all these nodes. Then we have the key-value storage.
Right now we support TiKV, a really nice Rust database, and Redis; and for the cache we have Memcached for storing objects and Redis for the pub/sub. On top of this architecture we have a reader, which reads super fast and merges all the queries to provide ranked results when searching by paragraph, by semantic search and by relations. Then we have all the ingestion components. These components read from the stream or from an API in order to serialize everything into the proper places in the underlying database. As you can see, it's a mixture of Rust, to provide fast results on the query side, and Python, to provide flexibility on the writing side. The API is quite simple and I'm not going to go deep here. We are going to publish all the Swagger/OpenAPI documentation plus the developer portal soon. You can see it's super simple: creating a resource, updating and patching. It's not traversal. I know. We decided not to make it traversal because knowledge boxes don't have a security schema inside them. If you have access to a knowledge box, you have access to everything inside it. We did that because machine learning models don't understand security. If you are training a model with all your data that is able to predict the next word, it means all the information stored in that model has the same security level. From the NLP point of view, it's much more useful to provide knowledge-box-driven models than resource-specific models with security for each resource. If you're planning to have something where each resource has its own specific permissions, then maybe FlapsDB is not your solution right now. But if you are aiming to create knowledge boxes for your teams, for your company, for your intranet, for your product or whatever, then it's a proper solution. And Guillotina: I think it's been a while since I talked about Guillotina in this talk.
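The knowledge box, resource and dynamic-fields hierarchy described above, with security living at the knowledge-box level, can be sketched with plain data structures. All class and field names below are invented for illustration; the real schema is defined behind FlapsDB's gRPC API.

```python
# Illustrative model of the logical hierarchy: a knowledge box holds
# resources, each resource carries a dynamic bag of typed fields, and
# access control applies only at the knowledge-box level.
from dataclasses import dataclass, field
from typing import Dict, Tuple

FIELD_KINDS = {"keyword", "text", "link", "file", "layout", "conversation"}

@dataclass
class Resource:
    rid: str
    fields: Dict[str, Tuple[str, object]] = field(default_factory=dict)

    def set_field(self, name: str, kind: str, value: object) -> None:
        # Dynamic schema: any resource can gain or drop fields at will,
        # as long as the field kind is one the engine understands.
        if kind not in FIELD_KINDS:
            raise ValueError(f"unsupported field kind: {kind}")
        self.fields[name] = (kind, value)

@dataclass
class KnowledgeBox:
    kbid: str
    resources: Dict[str, Resource] = field(default_factory=dict)

    def add(self, resource: Resource) -> None:
        # No per-resource ACLs: whoever can read the box reads everything.
        self.resources[resource.rid] = resource

kb = KnowledgeBox("team-docs")
doc = Resource("r1")
doc.set_field("body", "text", "Annual report...")
doc.set_field("source", "link", "https://example.org/report")
kb.add(doc)
```

Keeping permissions at the box boundary is what lets one model be trained per knowledge box, as the talk explains.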
You remember that in 2019, at the Toulouse sprint, we said: we need the Guillotina object server. I'm so happy, because this fulfills exactly what we needed in Guillotina: a storage layer at the bottom of Guillotina to store everything, including all the text information, in a way that can be searched. So our roadmap right now: we already have the MVP on Guillotina. We are going to open source FlapsDB by the end of November, we want to open the cloud sign-up by December, and publish Flaps desktop by the beginning of next year. And we are also working hard on machine learning, to provide more amazing machine learning tools on top of that. So, a bonus: if you like FlapsDB and you really want to be part of our community, and you want early access to the first versions that we are going to open source when we open the sign-up, we have these nice t-shirts, designed by Ana Probeza, our UI and UX supermaster, and they are super nice. We have three flavors, so we are going to have a t-shirt like this for the first sign-ups on our system. Here you have a QR code to access our sign-up registration for the Plone Conference. And I think that's all, if I'm not wrong. I'll give you time to take a picture of the QR code; I will come back to it, just go to the next one. Yeah, it's really important: we are hiring. We want to grow really fast, so we need a lot of developers: Rust and Python developers, TypeScript developers. What's really important, as you can imagine for a distributed system: observability, Kubernetes. We are a fully remote company, we are organized in squads, and Catalan culture is our main culture; we want to keep that, because we are Catalan and we always will be Catalan. If you are interested, just send a mail to careers at FlapsDB. And I think that now that's all. Oh yeah, I think that's all, guys. I really hope that you enjoyed it. Sorry for the long explanation about FlapsDB.
I'm super proud of what we've been able to achieve, and I'm super interested and really want to open source it soon. It's a lot of work, and we are doing it as fast as we can. Thank you so much.
The talk is separated into three parts: - Guillotina: At Flaps we developed our whole data layer using Guillotina, with an amazing result. We will explain our experience of how we used it, and its pros and cons. - FlapsDB: A new storage layer to allow AI search capabilities plus standard search. - Flaps: Our open source plan and how we want to help OSS projects like Plone and Guillotina deliver more value.
10.5446/56656 (DOI)
Hello everybody, we are now going to listen to Ramiro Luz, a long-time contributor in Plone, a long-time member of the Plone community, and he's here to talk about recent changes in his professional life. So thank you Ramiro for being with us, thank you for this talk, and it's all yours. Oh nice, thank you. Hi people, nice to meet you again, and I've been missing all of you. It's hard to talk to a screen, but I will do my best here, okay? So I will talk about the longest project of my life. The longest project of my life was kind of convincing politicians, trying to get people using Plone, for like 10 years, and then, when it was delivered, I changed my career: I left the job I had and started to work remotely for a multinational company. It was a real challenge for my life, and I also changed cities, changed houses several times. So I will talk a little about this, and all of that in the middle of the pandemic here in Brazil, okay? So, for those that don't know me, I am a software developer and have a master's degree in software engineering. I started developing for a small software house, and I was liking more and more to work in the back end, so I am now working as a back-end engineer, okay? I like to brew my own beer; I am not doing that right now because of the changes in my life, I don't have the space. I am a surfer, so I'm living near the beach to surf. I like to ride my bike and use the bike as transport, so I use mostly bikes, and sometimes I need to use cars also. I am a pet parent; my pet is called Rosa, it's kind of "rose" in English, that's her name. I'm Brazilian, was born in Florianópolis, Santa Catarina, and I am living a little bit south of Florianópolis, in Imbituba, near a beautiful place called Praia do Rosa.
So, a quick review of my career. I started working for a small software house; it was in a room of my friend's house. We had two employees: him, the boss, and me; then he hired his brother, so the two brothers and me as an employee. After that I started working outsourced, as a contractor for the State Education Secretariat, doing user support and some development of small internal systems. After that I also worked outsourced at the Banestado leasing bank; I worked on a contract simulator system and did some support also. I started public employment in the town council of Curitiba after a kind of contest, a selection you have to pass, and then I worked there for 20 years. And during the pandemic I started working for BairesDev, outsourced to Pinterest. So what's the difference I felt during this change? I was working in a political environment: it's a place where, when there are elections or some changes in who is the president of the town council, we have changes in the administration, we have new ways of working. But the process is always very strict; the corporate working process is stabilized, and the IT working process was trying to change, like using agile methods and such, but the corporate work process is well-defined, has been there for years, and is hard to change without changing laws or things like that. So it's very concrete; it's hard to change the process. Presence was registered by fingerprints, so we had to go there and put a finger in the machine that recognized our fingerprint, and then they knew we were there for eight hours or so. We had an annual evaluation, so we had a talk with an advisor who helped us evaluate ourselves and evaluate the process. And we have stability: it's harder to be fired, so you have to do very bad things, or your results are not...
The thing is, you have fewer ways of being fired; you have to be very poor in your results to be fired. That is one thing. And the other thing is the stability: it's forever, so it will be the same thing forever; it's hard to change, or the change is very slow. On the other side, when I changed, I went to a business environment where the results are very important. It's different from the political environment, where your results don't have the same metrics as in business: you have to deliver for people's satisfaction instead of money, or client satisfaction in terms of usage of your application. It's very different. The working process is standardized across the company, with freedom for things to happen, which means everybody knows how to work and the process belongs to the company, but the teams have their own reality. In our team, for example, for some time we were three people present at work, so we made adaptations to our work meetings, the schedule of meetings and the size of meetings, depending on the size of the team. One important thing: instead of proving you are there for some time, eight hours or something, it's important to answer things on time. When people ask you something during your working hours, it's better to answer as much as you can, so you can unblock people. It's important to report progress: what you are doing, how it is going, what the problem is, and when you are blocked, why you are blocked and what the plan is to unblock. And it's important also to deliver: deliver in two weeks what you expected to deliver. And if you cannot deliver what you expected, it's important for the team to know beforehand what's happening. So it's very important to communicate. We have constant feedback, at least I had, and a follow-up from BairesDev.
At Pinterest, for example, my project manager talks to me and says: oh, hi, how are you? You are doing well, but this aspect you can improve. So I have this feedback and I can improve, instead of being told "you are not working right, you are fired" or something. You have positive feedback, feedback for improvement. And I have a follow-up from BairesDev that helps me talk about how I'm doing things for our client, so it's another point of support. The environment is dynamic. A dynamic environment means it's alive: it has well-defined priorities, and sometimes the priorities change, and everybody is aware of the change and everybody changes in the same direction. That's what I can compare from what I had. So, as a public employee, I had to work from home in the pandemic from a place that was not prepared or planned for that, and I moved to a company that lives online. The administrative processes and procedures are online: you have a system where you have your employee profile and a few things there, you have contacts, and all the processes come with tools, and communications are online. From BairesDev I have a frequent follow-up: how are you doing? Are your clients happy? How is your work? Are you doing well? Do you need something to work better? So this is the weekly follow-up. And I have mentoring: if I feel like I need more help, or more knowledge or more understanding of some technology, I can try to find a mentor that may help me learn by myself: go there and learn this and this and that, okay? I work day in and day out with a distributed team. There are people in Japan or in other areas of the world that work on marketing, so sometimes I do support or answer questions from people that are in Japan, and there are parts of our team in Dublin, other parts in San Francisco. So we have several people from around the world at work.
So the work process is well-defined. We use tools to track our issues and search code and everything, and all the employees use these tools and workflows and communicate through Slack. So it's a well-defined work process across the company. The process is defined but flexible, so we can adapt the frequency or size of meetings; it's possible to keep the work process defined while adapting to the team's reality. One thing that I like most is the feedback: the feedback is focused on improvement, on where I can improve. I really like that. And the other thing that I'd like to mention is Lunch and Learn. I have been watching talks about several technologies we use at Pinterest, during their lunchtime; it's about mid-afternoon for me, but I always have some minutes, 15 or 30 minutes, to watch some good presentation. It's very nice. But this was the bureaucratic talk, and now the reality. During this change, during this pandemic, I had a very hard social isolation. I didn't visit family or friends; I didn't go out, I didn't go to the gym, and I was working alone in my room. So it was very hard, and I stopped communicating with people. I was not communicating through Slack with the team; I was working alone by myself for two weeks. And this made me feel more alone. And when I didn't get an answer for something that was happening in a system, or a bug, or a fix I should provide, I was feeling a very strong impostor syndrome. Like: I can't do that. I can't do that. I don't know. I don't know. And this got stronger, stronger, stronger. And I got depressed, very depressed, and very tired. I didn't feel like doing anything, and I started to cry several times a day. And after that, I got to a moment where people told me: you are depressed, you have to go and ask for help. And I asked for help.
I went to a therapist and did some psychology sessions, and I started getting better. And from BairesDev I got support from a mentor who helped me put the pressure down and understand that it's like that: it's always like that, the problems are difficult, they are very hard, and they trust that I will try my best to investigate the problem and communicate with people when I cannot find an answer. So I have BairesDev support from a mentor. And from Pinterest, I have support from another mentor who was helping me with Pinterest channels and ways of doing things and where I could get more help. So from Pinterest, I have another mentor to support me inside Pinterest. So I got kind of two mentors. And I did my part; I took the help I got from every side. The therapist, the Pinterest and the BairesDev support were essential, and the family support was essential, to get through it. And nowadays I am socializing a little more, with safety, with masks, and I am kind of getting back to the real life I had before this pandemic. So it's kind of this. This is what I was trying to talk about: the reality that is under the hood and that nobody sees. And sometimes people are going through this too and don't talk to anyone. So if I had only one phrase for you, it's: if you need help, go and get help. It's hard to do this by yourself. Go and get help. Okay. So now a word of marketing: I work for BairesDev. If you want to work with us, you can send me your resume. Or if you're a company that needs software engineers and is spending too much time hiring, you can drop me a letter also. And I want to acknowledge Érico Andrei (I did a little typo there) and Simples Consultoria. I'd really like to thank the organizers of today and the Plone Foundation.
And you that are watching the presentation right now, the Town Council of Curitiba team, and also the support I got from BairesDev and from Pinterest during this hard moment. So thank you. Thank you very much, Ramiro. That was a good talk. Thank you for sharing all of that with us. I could relate to some parts of it, definitely, and I think a lot of people could relate to different parts of it. So yeah, it's very courageous to share that with us. It's definitely not easy, but definitely positive. You did good. So thank you again. It was nice to have you. I don't see any question right now in the Slack, so I think we're going to move to Jitsi. So yeah, if you all scroll down on your LoudSwarm page, you'll see this button: join Jitsi here. See you there. And thank you again, Ramiro.
What happened after a transition from a public institution to an international career restart in the middle of a pandemic.
10.5446/56658 (DOI)
Hey everybody, welcome back to track two of Plone Conference 2021. I'm here with Raje, who is a front-end developer at Ishkrat, and he is going to be talking to us about React and Guillotina integrations. Hello, thanks, Andi. Today I'm going to present to you what Guillotina React is. My name is Raje Buche de Güell, and I'm from Berga, Catalonia. I work at Ishkrat, and I am a front-end developer. I basically work with React and Angular, and I have also worked with Python, developing Django and Guillotina applications. Guillotina React was built by Jordi Collell. It provides an interface to access all Guillotina content, depending on user permissions, and allows you to apply actions like creating, modifying and removing content, and user-interface interactions like display mode, etc. All this flexibility lets you build it your way, adding your own content with your forms, your icons, etc. It's built around the idea of an extensible framework to roll out your own Guillotina management interface, a layer that can be extended from the outside. Before explaining how we can build our own Guillotina management interface, we are going to see a project that is built with Guillotina React. The project is Fira Mediterrània de Manresa. It's a performing arts fair for artists' proposals, which uses roots, traditions and popular culture as the driver of creativity. Thanks to Guillotina React, we could build an application to manage the whole fair. Here we can see the public website, and some performances that happened at the fair this year. For example, here we see a performance, information about this performance, the dates, and some data about it. All this data is saved in Guillotina, and then we manage it thanks to Guillotina React. Here we can see the application that we built with Guillotina React. This application has two main sections. The first section is the Guillotina React component, which lets us manage all the data.
Inside this component, all data is in traversal format. For example, in this edition we can see several types of data: here there are some emails, some performances, some entities, some artists. By default, Guillotina React provides some tabs to edit this data. For example, we can edit the edition's data in the Properties tab; we can modify all of it. In the Permissions tab, we can modify the permissions. These permissions are applied to the current object, but depending on the setting, for example Allow or AllowSingle, they can apply to the current object only or also to its children. Finally, here we can see a Vocabularies tab that allows us to modify all the dynamic vocabularies. This is a custom tab; Guillotina React doesn't provide it by default, but we can add or delete tabs for each content type, for all objects in Guillotina. For example, if we look at an artist object, here we don't have the Vocabularies tab, but we have the Actions tab. In this example, we can delete the object. The permissions are the same as in the container type, and the properties too, but in the Properties tab we override the view to manage the data differently from the container. For example, here we have a button that allows us to change the state: in this case, the artist can be published or unpublished, and here we can change that. Guillotina React doesn't provide this by default; we override the view to show different info and different actions. In the Items tab, we can search by text input or by content type. Then here we can add more objects inside the artist: for example, we can add some proposals, some accreditations or some payments. Here we can see the default form that Guillotina React provides. In this case there are only the ID and the title fields, but if we want, we can modify and override this form. For example, for the accreditation form, we changed it to ask different things.
In this case: the name, the last name, email, typology and the language. Then, in the second section, we can see isolated things. This is a view that is outside the Guillotina component. Here there is no traversal; we only get some information with the Guillotina client, because Guillotina React provides a Guillotina client that allows you to connect with Guillotina, get data and perform actions. For example, here we can download the texts or send emails for these proposals, and here we can apply some filters, or sort by different fields. Okay. Now we will see a simple example: how can we build a simple Guillotina management interface with Guillotina React? First we will see the login page, then the default content type, then managing users, then the custom content type view, and finally how to integrate the application into the Guillotina server directly. Here is a simple example with Guillotina React. First, we can see a different interface than the Fira Mediterrània login page. This is because we can choose the context to log in against. You can log in at the root, the Guillotina root, or at some container. The difference is that at the root, first we see all the databases that we have in Guillotina, after that we see all the containers, and finally we see the container content type view. If we change it and try to log in against a container, we directly see the container, because the login is against the container and not against the Guillotina root. Then here we can see the default tabs for the container view. In this case, we have some add-ons that we can install or remove in Guillotina; here we have the dbusers and the image field add-ons. Then we have the Registry tab. It's not implemented yet, but the idea is that we can get the Guillotina registry: we can save some information in the Guillotina registry and then see it here when it's implemented.
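The choice of login context described above (root versus container) comes down to which URL the client authenticates against. Guillotina exposes an `@login` endpoint on its contexts; the helper below only builds that URL as an illustration, it is not Guillotina React's actual client code:

```javascript
// Build the @login URL for a chosen context. Logging in at "/" gives
// the root view (databases, then containers); logging in at a
// container path jumps straight into that container.
function loginUrl(base, contextPath = "/") {
  const ctx = contextPath.endsWith("/") ? contextPath : contextPath + "/";
  return new URL(ctx + "@login", base).href;
}

const rootLogin = loginUrl("http://localhost:8080", "/");
const containerLogin = loginUrl("http://localhost:8080", "/db/container");
```

A client would then POST credentials to that URL and keep the returned token for subsequent requests; the host and paths here are just the demo's assumptions.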
Then in the Behaviors tab we see all the behaviors. The container type doesn't have any static behaviors, but if we want, we can add some behaviors to this object only, to this single object. Otherwise, we need to add them as static behaviors if we want the behavior on all objects of a content type. Then we have the Permissions tab, like the others. Finally, we have buttons to delete, move and copy objects. Here is the login page for the container. In this example, we have users: the dbusers add-on creates the groups and users folders for us. Here we can add some users to log in to the application, and some groups to organize these users. By default, if you try to log in with one of these users, you can't, because the user doesn't have the correct permissions. Here in the container we have the training group, which has the Guillotina Member, Guillotina Reader and Guillotina management-interface-user roles. This last role allows us to log in with this application. It's an example role; by default Guillotina doesn't provide this role and the permissions associated with it. For example, if we create a user, with id "test" and password "test", and then try to log in against the container (because this user is in the container), we can't. But if we change the principal roles of this user and say, okay, this user is now Guillotina Reader, Guillotina Member and Guillotina management-interface-user, we set the permissions, and now we can log in and see all the data. But now we only see the Items tab and the Actions tab; this is because you can set the permissions required to access each tab. Now, coming back with the previous user: finally, we can see the custom content type view. In this Guillotina we have the demo type. This is our custom content type that we created for this demo, and this type has some properties: text field, rich text field, number field, choice, boolean, list, date and datetime.
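The permission settings used throughout this demo, Allow applying to an object and its children while AllowSingle applies only to the object it is set on, can be modeled as a simple lookup that walks up the parents. This is a toy model, not Guillotina's real security machinery:

```javascript
// Simplified model of Guillotina-style permission settings:
// "Allow" is inherited by children, "AllowSingle" only applies to the
// object it is set on, "Deny" blocks.
function hasPermission(obj, role, permission) {
  let node = obj;
  let isSelf = true;
  while (node) {
    const setting = node.acl && node.acl[role] && node.acl[role][permission];
    if (setting === "Allow") return true;         // inherited downward
    if (setting === "AllowSingle") return isSelf; // current object only
    if (setting === "Deny") return false;
    node = node.parent;
    isSelf = false;
  }
  return false;
}

const edition = { acl: { editor: { view: "Allow" } }, parent: null };
const artist = { acl: {}, parent: edition };          // inherits Allow
const container = { acl: { editor: { view: "AllowSingle" } }, parent: null };
const child = { acl: {}, parent: container };         // does NOT inherit
```

With this model, the editor can view both `edition` and `artist`, but only `container` itself, not `child`, which matches the behaviour described for the Permissions tab.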
And now we can see how to modify or override this view. If we go to the code, here we have the Guillotina component that lets us build the whole application, and here in the registry we can override things: the forms, the views, the items columns, some components and the behaviors component. For this content type, we added a custom tab. We override the default view and define the Items, Properties, Behaviors, Permissions and Actions tabs, plus the custom tab, and then we define the permission for each tab: in this case, to see the custom tab you need the guillotina.ViewContent permission. If we want, we can delete, for example, the Permissions tab, and then we don't see that tab anymore; we can modify all of this. If we try to add a demo type object here, we can see the form. This form is overridden too, because to create this object we need these fields. Guillotina React provides a required-fields form: it reads the JSON schema that Guillotina provides for each content type and then renders a form with only the required fields. It's automatic: it fetches the schema and renders everything according to the types defined in the schema. We can see an example: in the item folder, like in the first application, it's the default form. If we want, we can create a form and render the fields however we want. We can also override the items columns; here, for the demo content type, we override them to add an extra column. For each content type, we can modify these columns and render whichever columns we want. Finally, in the Properties tab, the most important component is the editable component. This component reads the JSON schema and then renders the edit component when we try to edit an attribute, or the render component when we view the value of the field. For example, by default, when the schema widget is "textarea" or rich text, Guillotina React renders a textarea input.
But in this case, we change it to render the rich text input, and when we edit this field, we see the rich text editor. If we don't override the render component, we don't see the parsed value: we only see the raw HTML code. And here we can override it with a render field component. It's the same as the edit component, but in this case we choose which component to use when rendering the information; this component only parses the HTML value. So if you have fields with different widgets or types, you can define components for each type or each widget, and you can create forms and edit the information automatically, only by modifying the interfaces in Guillotina. Finally, we will see how to integrate this application into Guillotina. Here on localhost we have the server, the Guillotina. Guillotina can serve JavaScript applications. To do this, we need to create the build of the React application, and then we take those files and copy them into a folder in Guillotina; for example, here we copy these files into the static/build folder. Finally, we change the Guillotina configuration and set, in the JavaScript applications setting, the path where it finds the index.html in this folder. And finally, if we visit this path, we can see the same application, but in this case it's Guillotina that serves the application, and with one service you have both Guillotina and the application. And that's all. Thank you for listening, and thank you to the Plone Conference for giving me this opportunity. Thank you so much for the fascinating presentation. I really love seeing the ability to integrate other web application UIs into Guillotina. I hope that everyone enjoyed this as much as I did. We have the face-to-face available if you have any questions or would like to interact with our speaker; I posted it into track two. And thank you very much. Thank you. Bye.
Guillotina React is an application that allows managing Guillotina data on the web. We explain how it integrates with Guillotina and how to configure it in different projects.
10.5446/56660 (DOI)
And welcome back. With me now is Johannes Raggam, who's a really long-time core Plone developer. He's been the Patternslib maintainer, and he continues to be the Patternslib maintainer, which is why we are here today to hear more about Patternslib, and Patternslib with TipTap, which is very interesting. Johannes is working at Syslab, where he works not only on Patternslib, but also on Quaive, which, as I hope you know, is the Plone-based intranet solution. And so Johannes, please take it away. OK. Hi. Thanks for that kind introduction. I will start my screen sharing now. Give me a second. In the background, my tea water is getting ready. So I'm Johannes Raggam. I was already introduced, so I will skip that and start right into the topic of today. I will talk about the TipTap editor and how to integrate it with Patternslib. It's aimed at designers and integrators, as well as developers; I hope there is something to take away for everyone. We will give an overview of what Patternslib is and show off the TipTap editor, an interesting text editor and probably an alternative to our TinyMCE, and show how the TipTap editor can be integrated in Patternslib as a pattern. That's probably the main part of the talk today. And then I will show you how we can get this into Plone. As a spoiler, it could be done with Webpack Module Federation; that's something I'm currently experimenting with, and I got some very interesting results. So what is Patternslib? Patternslib is a library of reusable patterns for designers, and with it you can create user interfaces. It's aimed at people who do not know much about JavaScript. It's a library invented by Cornelius Kolbach; his concepts go back to the year 2005. And Patternslib itself was started by Wichert Akkerman and Florian Friesdorf; they developed it in the year 2011. In the meantime, I am maintaining it at Syslab, and it's a lot of fun. The library was always kept up to date over the years.
So we are currently using the latest JavaScript concepts, and that makes it fun. At a glance, here is how you can use Patternslib: you go to the website, patternslib.com, and download the latest zip file of the Patternslib package. You unzip it and include it in the website like you do with any JavaScript library, and then you can start using it. It works by looking for specific class names, and if it finds a specific class name, it invokes some JavaScript logic on that element. For example, this would give us something which is shown in the Patternslib demo: a date picker. There are many different options you can use; for example, it can automatically set the end date after I set the start date, and, also an interesting option, it can automatically format the date. This is the system overview of Patternslib. It works like this: first the Patternslib library is loaded, and then all the included patterns are registered in the Patternslib registry. For example, if you have the tooltip, modal, inject and TipTap patterns included, those patterns are registered. After that, when the document-ready event happens, the document is scanned for the CSS selectors which trigger the functionality on a specific element, like pat-date-picker; it looks for such classes and then initializes the patterns. For example, it finds the tooltip and the TipTap pattern on the page; only those two patterns are initialized, and the others are not. And if another event happens, like patterns-injected, which is an event triggered by the inject pattern, which does remote AJAX calls, then it scans those portions of the document again and initializes the patterns on that portion of the document. What are the benefits of using it?
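The register-then-scan lifecycle just described can be modeled in a few lines. This is a toy model with plain objects standing in for DOM elements; the real Patternslib registry matches CSS selectors on actual nodes:

```javascript
// Toy Patternslib registry: patterns register a trigger class, and a
// scan initializes every pattern whose trigger matches an element.
const registry = {};

function registerPattern(name, trigger, init) {
  registry[name] = { trigger, init };
}

function scan(elements) {
  const initialized = [];
  for (const el of elements) {
    for (const [name, pattern] of Object.entries(registry)) {
      if (el.className.split(" ").includes(pattern.trigger)) {
        pattern.init(el); // invoke the pattern's logic on the element
        initialized.push(name);
      }
    }
  }
  return initialized;
}

registerPattern("date-picker", "pat-date-picker", (el) => { el.datePicker = true; });
registerPattern("tooltip", "pat-tooltip", (el) => { el.tooltip = true; });

// On document ready the whole page is scanned once; after a
// patterns-injected event only the injected fragment would be re-scanned.
const page = [{ className: "pat-date-picker" }, { className: "plain" }];
const result = scan(page);
```

Only the element carrying the trigger class gets its pattern initialized, which is exactly the behaviour the registry scan provides on real pages.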
As I already mentioned, it's designed for people who do not like to write JavaScript code; Patternslib offers ready-to-use functionality which can just be used by designers, for example, to build prototypes. It encourages you to use semantic HTML: for example, you should not use a button where an anchor is better suited, and the other way around. It's developer-friendly in the sense that it no longer uses outdated technologies like RequireJS; we got rid of that one and a half, two years ago. And it gives a developer an easy way to initialize JavaScript: you can easily create a pattern which gets initialized by the Patternslib registry, and you can also easily pass options to the pattern's code. Also, it re-initializes the JavaScript when needed: for example, if you get a new HTML fragment from the server via pat-inject and want to inject it into your page, any patterns in those fragments are initialized, and then you have the functionality again, without having to worry about manually initializing JavaScript code on document updates. Also, Mockup uses Patternslib, and we're currently updating Mockup to use the latest Patternslib with the ECMAScript 6+ syntax, the most modern version, ECMAScript 2021 and so on, and we came quite far with it. That should be the topic of a lightning talk or something this week. And now the Patternslib showcase: on the demo page at patternslib.com, you can find a lot of different patterns. We have a calendar pattern, which initializes FullCalendar, for example. Then there is a gallery pattern, which uses PhotoSwipe as a gallery. Everything should work just as you expect: for example, if I click on the last picture, the last image is opened, which is not the case if you just used PhotoSwipe without any extra options. Then we also have Masonry, for example, and a lot of other patterns.
Not all the patterns are included here, and there are also add-ons, like pat-tiptap, which is also not included here. But these are the core patterns, and there's a little bit more to come. We are using Patternslib in the Plone intranet project Quaive. This is just a prototype — this is not the Plone-based version, these are just mockups. But the nice thing with Patternslib and the way we are designing at Syslab is that we have fully functional prototypes that we can just navigate through and use as if it was a real application with backend logic. And we are using Patternslib heavily here, in a lot of different places. Also the OiRA project, which we also developed at Syslab — an online risk assessment tool for the European Union, for companies who want to do such assessments — is using Patternslib a lot. Have a look in here; we have to pass the login with some valid data. And here, the toolbar we are clicking on is injecting different types of content into the main body part. And yeah, that's Patternslib. OK, let's continue with the tiptap editor. The tiptap editor is a new and modern text editor, which is based on ProseMirror. And it's headless — it's an editor which comes with no UI; you have to build your own UI. And that's actually a good thing, because you can build it just the way you want. It supports HTML and Markdown, and whatever you need that doesn't exist yet for tiptap or ProseMirror, you can build. And since it's based on ProseMirror, it has a strict data model, so you have to try hard to mess up the HTML and make it look ugly. For example, you paste a strong element into tiptap/ProseMirror, and it changes it to the element you have configured, like a bold element, a B. And it has awesome features: it has built-in collaboration support — tiptap specifically has collaboration support where you really can work together on a document. It supports mentions.
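The "strict data model" point can be illustrated with a toy normalizer. This is not ProseMirror's actual schema API — just the idea that the schema declares one canonical tag per mark, so equivalent pasted markup (`<strong>` vs `<b>`) always ends up as the configured tag, and unknown tags are dropped.

```javascript
// Toy schema: each known tag maps to one configured canonical form.
const schema = {
  strong: "b",
  b: "b",
  em: "i",
  i: "i",
};

function normalizeTag(tag) {
  // Known tags are rewritten to their canonical form,
  // unknown ones are rejected (dropped from the document).
  return schema[tag.toLowerCase()] ?? null;
}

console.log(normalizeTag("STRONG")); // → "b"
console.log(normalizeTag("em"));     // → "i"
console.log(normalizeTag("blink"));  // → null
```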
You can also have checkboxes which you can tick, like on GitHub, and a lot of interesting features. Let's look at the website. It probably takes a while until it is initialized — or it's already here. And you can see this is already a collaborative editor. I can write something here and open the same site in a new browser window, and that should immediately update the other one, if I am lucky. Maybe not. But OK, that's a demo thing — the collaboration feature, when I tried it at the right time, was really amazing. Let's just continue. We have a Patternslib integration; you can find it here. And we already have it in the prototype demo, which I will show you. That's our tiptap integration. We have, for example, just normal text editing features. And a link overlay: if I click here, then I can visit the web page — normally you are not leaving the web page, because you are immediately in editing mode if you have the correct permissions for that. I can open the edit-link panel, where I can just change the text. And insert an image, for example — select from the list here. Yeah, that's our tiptap integration. There are also tables, which ProseMirror didn't support a while ago. Yes, and we're looking into how to build the UI today. So how can this be done? First, let's just start by adding a textarea — a textarea with the class pat-tiptap. Here's some sample content as the textarea contents. And let's look at how this example looks. This is just a basic design, nothing fancy like we have seen before. And this is tiptap — currently without any features, you cannot even make anything bold, but it's already initialized. Let's look at the source code just quickly. The text here is hidden by the tiptap pattern, and a tiptap container is added where the ProseMirror editor and the tiptap editor are initialized. The content, when you change something here, is synchronized with the textarea value, actually. Let's continue with the toolbar.
Here, I add some buttons to a div element. And the buttons need some specific classes to be recognized by pat-tiptap. This is documented in the README of this pull request I've shown before. And this is the way we built pat-tiptap: it's part configuration, but also a lot of conventions. So you have to have a button with the heading-level-1 class to get the functionality of setting a title in an H1 element. And you connect the toolbar with tiptap with this configuration: you say toolbar-external and give toolbar-external a CSS selector where it can find this toolbar — the same as this ID here. And it looks like this now. Unlike before, we also have the H1 functionality initialized, and we can set something bold, italic, H2, and the stuff you expect from a text editor — but we have used our own HTML structure for this toolbar. Next, a link overlay, so we can edit links. For that, we add an anchor link here, which points to a structure in the HTML document — the modal which should be opened. It has the pat-modal class, which is the modal pattern from Patternslib. And yes, this here is the modal content, which is then used by pat-modal to open the modal to fill in. You can also use an extra document from the server side, for example, but in that case this comes directly from the same document you have already loaded. In some cases this makes sense, because it's just faster. And then you have to configure your template with some conventions: the tiptap editor searches for a link-panel class, and after you confirm by clicking on this button with tiptap-confirm, it searches for a tiptap-href element and a tiptap-text element — or whatever it finds. If it doesn't find the text element, it just uses the href element, and then updates the selected text with the value which you have written into the href.
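Putting the textarea and the external toolbar together, a sketch might look like this. The button class names and the `toolbar-external` option follow the conventions from the pat-tiptap README mentioned above; treat the exact spellings as assumptions.

```html
<!-- Your own, freely styleable toolbar markup: -->
<div id="tiptap-toolbar">
  <button class="button-bold">Bold</button>
  <button class="button-italic">Italic</button>
  <button class="button-heading-level-1">H1</button>
</div>

<!-- pat-tiptap hides the textarea, mounts the ProseMirror editor in
     its place and keeps the textarea value in sync for form submits. -->
<textarea class="pat-tiptap"
          data-pat-tiptap="toolbar-external: #tiptap-toolbar">
  <p>Some sample content.</p>
</textarea>
```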
And you also have to configure it in the textarea, by defining which class tiptap should look out for to find all those input fields — in that case, it's link-panel. So you configure it here. And it does look like that: without any fancy design, just to show you the functionality, we can now add a link here and have it linked. If you click on it — that's a tiptap configuration thing which we added to pat-tiptap — you are not redirected to the link, so you can still continue editing. And now let's add a context menu that gives you some options when you click on the link. Again, we define here a template with an HTML structure which should show up when you click on the link, in a tooltip by Tippy, the tooltip library which is used by pat-tooltip, our Patternslib tooltip pattern. And again, you have to follow some conventions; instead of reading everything out loud, it's documented in the README of pat-tiptap. You also have to configure it in the textarea. And in the end, it allows you to have an overlay when you click here — I can edit it or do something with it. But the point is: you can just style your own text editor the way you want. There's a lot more to come, actually, like an image overlay, which is a bit more complicated. Just to show you the possibilities, I added here a structure which offers you an external-link widget and a selection widget, where you can either add an image as an external link in the link overlay, or pick from a selection of different images. And those are injected with pat-inject, so you can load the contents of this widget from the server, and actually build the templates which you want to show — like the image selection — on the server. These are the two different templates. And I'll show you the result right away. You have to configure it again, and the result looks like that.
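A hypothetical sketch of the link-overlay conventions just described: the `link-panel`, `tiptap-href`, `tiptap-text` and `tiptap-confirm` names follow the README conventions mentioned in the talk, but the exact attribute spellings here are assumptions.

```html
<!-- A pat-modal link opens the overlay defined in the same document: -->
<a class="pat-modal" href="#modal-link">Insert link</a>

<div id="modal-link" hidden>
  <!-- tiptap searches for the link-panel class... -->
  <form class="link-panel">
    <input name="tiptap-href" placeholder="https://example.com" />
    <input name="tiptap-text" placeholder="Link text" />
    <!-- ...and reads the fields when tiptap-confirm is clicked. -->
    <button class="tiptap-confirm">Insert</button>
  </form>
</div>

<!-- The textarea is told which class marks the overlay structure: -->
<textarea class="pat-tiptap"
          data-pat-tiptap="link-panel: .link-panel"></textarea>
```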
I can click here on external and add an image, here from pre-selected values. But there's a design glitch — I now have to move... let me see, can I scroll down? Yes, I can. Yeah, there's just no design yet. Just edit this image here, and let's move on. We can make small improvements to the image overlay. In that case, I added to the second panel the trigger: autoload option, which automatically pre-selects the image selection when you open the image overlay, instead of selecting nothing. And the other thing is, I added two pat-forward patterns to the radio buttons which make up the image selection items. If you then select one of those, the submit button is automatically clicked, and the image is automatically inserted — you don't have to click on the button anymore. One interesting thing, actually, is how the selection here works: tiptap expects a tiptap-src input field where the value points to the URL of the image which you want to insert. And here the selection is just done with radio buttons — you have multiple of those tiptap-src radio buttons on the same page, and the label is actually a preview image. And that allows you to build the image selection like this. It's also similar to the other ones. Let's wait for the example... I have the source now here; I can also change it — if we remove the welcome part, it's still updated. OK. Another interesting thing: tiptap offers a plugin for mentions, and in our pat-tiptap integration I extended it to also support tags. Again, you configure the textarea here, the tiptap pattern, then create a template which should be shown when the mentions tooltip or overlay opens. And it's similar for the tags. And it does look like this — it should work out of the box. You can now select something here from this list, and then a link is inserted. Or with tags, it's just similar, but then with the tag feature. And this is now just a link with a value which you can configure.
And what you do after that is up to you — for example, if you want to notify someone who is mentioned in the text, then you would just have to parse the document for mentions and notify those people. Good. This page now shows how to add autofocus and placeholder support. I will just show you the final result now. Here, if I remove everything, then we should see the placeholder. Do we? No, sorry, it doesn't — but you could have seen the placeholder here. Oh, forget it, just a small thing. However, it's still a work in progress, of course, and we are just about to start testing it in the field. And then it should replace the text editor we're currently using in Quaive. So how would the Plone integration look? The goal of integrating a bundle like this would be to add an add-on without recompiling — that's something which is hard to achieve in Plone 5 currently. It should have no code duplication, so you just don't want to load libraries you have already downloaded, again. The bundle sizes should be as small as possible, and the performance should be good. As I said, that's hard to achieve — Plone 5's Require.js setup made some promises regarding that, but actually it couldn't deliver and was not well understood. But there's now quite a new technology, which is called Webpack Module Federation, built into Webpack 5, and this is something which helps out here. The thing is, you can have separate bundles and define dependencies between each other. So you can, for example, from one bundle depend on a module of another bundle. And you can also define libraries which should be shared between those bundles — like jQuery, Patternslib, et cetera — and define exports, modules you want to be used by other bundles, and define modules which you want to use from other bundles. And you fall back to your own dependencies if those other remote bundles do not exist. So actually, you are bundling a whole bundle with all the dependencies.
But in the end, you don't get just one big JavaScript file, but many small JavaScript files. And if one bundle depends on something which was already loaded, Webpack can recognize that — by use of global variables which Webpack inserts — and use that already-loaded dependency instead. Here's an overview again: the browser would not download Patternslib again if it already has one Patternslib library. And it even works with different versions of the library, within some configuration constraints. You can say you need this version as a minimum, and then: if a matching version is already loaded, the loaded version will be used; otherwise the one which you have shipped with will be used. Yeah, it's quite a new technology, I have to say, and the documentation on Webpack itself is currently not in the best shape — a lot of information is missing — but there are many tutorials on the net. And yeah, that's it. This here is an example Webpack config where we add this Module Federation support. Actually, this config is just a minimal config, because it extends the bigger config from Patternslib. But the part which is interesting for Module Federation, or which gives Module Federation support, is just this one. I forgot to highlight the import here: import the ModuleFederationPlugin, add the plugin to your other plugins — to the Webpack plugins array — and configure shared libraries like Patternslib or jQuery, in that case. And another thing you have to do is to slightly adapt your entry points. The entry point is the file which you will include in your HTML page — whether it's one big file or just the entry point which loads other dependencies. And you have to change it in a way that you just add this one import statement for your previous entry point. If you were already using Webpack, then you would already have an entry point, and you just import that other entry point.
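The general shape of such a config — a minimal sketch with illustrative package names, not Syslab's actual configuration:

```javascript
// webpack.config.js -- minimal Module Federation sketch (webpack 5).
// Package names here are illustrative.
const { ModuleFederationPlugin } = require("webpack").container;

module.exports = {
  // index.js contains only a dynamic import of the real entry point:
  //   import("./main.js");
  // That async boundary lets webpack split everything else out,
  // leaving an initial file of just a few kilobytes.
  entry: "./src/index.js",
  plugins: [
    new ModuleFederationPlugin({
      name: "pat_tiptap",
      shared: {
        // Shared singletons: loaded once, even across bundles
        // that were compiled independently.
        jquery: { singleton: true },
        "@patternslib/patternslib": { singleton: true },
      },
    }),
  ],
};
```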
And a nice side effect of that is that Webpack splits out all the JavaScript code except this little entry file — including the boilerplate which Webpack adds on top of it so that it can find all the other dependencies. And this gives you an initial bundle size of just a few kilobytes. Then, of course, all the other dependencies are loaded — but if you use lazy loading, which we do in Patternslib, then only the libraries which you are actually using will be downloaded. And I can briefly show a prototypical implementation in Plone. Let me get everything to the correct workspace. So here, in just a normal Plone add-on, which is called collective.tiptap, I added a static directory. The static directory looks like that — this is the static directory here, and the bundle: pat-tiptap with all the dependencies which are necessary. Here's a bundle.js and a lot of other JavaScript, which is generated by running make bundle in the pat-tiptap Patternslib add-on. This is the generated output, and I just put it in here. What's next? This is the registry.xml bundle configuration. And you define a bundle here like you are used to in Plone 5, except that you actually only need some of the values. You don't need to add the last compilation date and stub modules — that's just not necessary anymore, because in Plone 6 — not merged yet, in the ES6 branch — we have the new resource registry, which you will see in a second. Also, I added a test browser view which actually just includes all the code which we have written before in this tiptap tutorial. This is not using any z3c.form or anything like that — it's just work which has to be done, but this is a part which is well understood by Plone developers. And let's see how it looks in the browser. We have here the resource registry, the new one. You see this entry here, which is a test Module Federation bundle which I included.
It actually comes from a Webpack development server, which constantly recompiles when I make any changes in the pattern — so I just included the URL from a development server here. Now we just include it and save it. The default Plone bundle is currently also included from a development server, but the tiptap bundle is included as we are used to. And this is the new resource registry, which is really a nice thing. It also does not depend on a resource-registry JavaScript, which could sometimes be broken — it's just a backend-rendered template, which won't break. Good. And this is the tiptap integration. I reload this page to see what happens in the network tab. Now all those resources are downloaded; the Patternslib core dependencies should not have been downloaded twice — although there's a little bit to tweak and optimize here. But I can have different bundles with cross dependencies, without recompiling. Good. And this tiptap editor works as we have seen before. Of course, this needs a lot of styling and so on, but yeah, that's basically it. Back to the talk — we are almost at the end, and I think we have almost no time anymore. The outlook, what has to be done: we need Patternslib to support Webpack 5 — there's already a branch which is almost ready, but it needs a little more testing — and all our external Patternslib add-ons which we are using need to be migrated to Webpack 5 as well. Then we would just make the switch. We need Mockup on ES6, which is currently quite far along; it's not ready — not all tests are running through, not all patterns have been migrated — but I would say 80% or even 90% is done. One of the bigger things still to be migrated is the file editor. And then this whole thing, including the Module Federation stuff, has to be tested and improved. And that's it. Thanks for your attention. Thanks, Johannes.
It's always fascinating to me to hear and watch how you show off Patternslib and Mockup live — to me, it really seems like magic. But it's also cool to see this new editor; it sounds a lot more powerful, more flexible, and easier to integrate. So thank you for your great presentation. And for those of you who are not yet
tiptap is a headless editor which allows a fully customized UI and which has a strict, tree-based data model, ensuring clean HTML code and allowing collaborative editing. We've built the Patternslib integration pat-tiptap at Syslab, which is replacing our old HTML editor in Quaive. This talk shows how to use advanced Patternslib usage patterns to build the editor's UI, including a custom toolbar and overlays and popups for image selection and link editing. The tiptap Plone 6 add-on collective.tiptap also shows how to extend Plone with new JavaScript functionality without the need to recompile any bundle.
10.5446/56662 (DOI)
Good morning everyone. Welcome to day four of the talks for the Plone Conference. I'm your moderator for today, Chrissy Wainwright. Today I am introducing Rafael Nunez. He is a longtime Zope and Plone developer — he's been around the community since 2002, so quite a bit longer than me. He works for the City University of New York and has done lots of things with Plone and different add-ons. And today he is going to talk about institutional assessment with Plone. So Rafael, you can go ahead. Thank you. I'll share my screen. Okay, so: institutional assessment with Plone. And I will explain the JAM later. Okay, so Chrissy already introduced me. We run the York website, which is powered by Plone. We run a whole bunch of different systems within the college, like the parking permit system and faculty reappointment — they're all running on Plone. So in 2019, the college went for re-accreditation and we were given a warning. The warning was to improve our internal system of documenting assessment results, action plans, and follow-up strategies. This was by the Middle States Commission on Higher Education, and we had a deadline of March 2020. The reason why this was happening is that there was no tool, college-wide, for institutional assessment. Units were using Excel, Google Docs, Blackboard, or whatever tool they wanted to use at the time to collect and gather the information and fill out the documentation. Also, department chairs change over time within the accreditation period, so sometimes things may get lost in the process when they're not passed properly from one chair to the next. And the plans, even though they were available online, were being distributed as PDF documents. So we had a web page which had all the different units' documents — plans and reports — but there was no way to correlate the data.
If you wanted to look at one particular part of the plan that correlates to different units, you couldn't do that, because there's a whole bunch of documents all over the place. So we had eight months from concept to having a full year of data in this system — eight months to develop the product, which didn't exist in our institution, as well as getting all the units on board. There were about 108 different units: 49 academic and 53 administrative units. And finances were an issue at York — yeah, we were broke at the time due to contracts the City of New York delayed for years, and then they had to retroactively pay a whole bunch of salaries to everybody. And my team is way understaffed. So the new director of institutional assessment called a meeting and asked if we could build it. She didn't consider getting another system, because she has experience with several from different institutions before she came to York, and they were very expensive for our budget and didn't meet the needs. So they asked us: can we build it? And we said, yes we can — or: yes, Plone can. So during the meeting we got the requirements. Visitors should be able to look at a unit's mission and goals. The coordinator should be able to create and update the mission and goals, as well as plans, mid-year checks, and annual reports. Then they will submit them to the director, who will review the information, and it will then be submitted to the VP for approval. The VP will look at the information, make any corrections and approve it, and then it will go to the committee. There's a committee in the college — one for academic and one for non-academic units — and they will review the plans, mid-year checks, and reports submitted; they will make comments on them as well, and they will approve or reject. So those were the basic requirements. So we created a plan. I mean, we met during the summer.
So I had to take time off, because it's either use it or lose it, since I was over my cap on annual leave. So by September 1st, we needed to go live with the missions and goals. Phase two was going to be September 15, where we would have the non-academic unit plans. By the end of the month, we would have the academic unit plans ready to start populating. Phase four was the mid-year check. Phase five was the assessment reports — and this is now in January. The reason why there's this gap is, one, because my team is understaffed and we have other work to do in addition to creating this, but also all the units have to populate all this information. They need to gather data as well as create the reports, mid-year checks, and plans, and they have a due date by which they have to do this task. So each phase was about 15 days before they had to start uploading and submitting information. That gives the units enough time to get trained on how to use the tool — how to upload your plans or mid-years or reports — and be able to start uploading and submitting the information. And the last step is the institutional assessment department report, where they can see the big picture — I will get to that in a second. So, hammer time: it's time to start working on the tool, on the system. We already have Plone. We are using Plone for everything that we develop within my team — it's usually Plone, though not necessarily; there's a tool for everything, but Plone fits most of the stuff that we do. We use the Plone distribution CastleCMS — we run our main website on Castle — because it had a lot of the things that we were going to need. It was the way we wanted to build Plone at the time we migrated in 2017: instead of having to create all these different components, Castle had, as a distribution, the things I had envisioned for the Plone site at the time. Plone has content rules, so that's something that we use for notifications on every state transition.
The notification will go out to the respective party. We needed to add the mail-to-role add-on for Plone, because with the default rules you only have reviewer, reader, and a couple of other roles that you can send to. Mail-to-role allowed me to add new roles to the Plone site and be able to send notifications to those specific roles. For LDAP authentication, we use AD at the university, so our website is connected to that: all our faculty, staff, and students can log into the content management system using their AD credentials. Working copy support and iterate is very important to me, because we want to keep track of the changes over time — and pretty much, if something goes wrong, you know who to blame. Workflow policy support: our site is not only a public-facing site for our students, but it also has a lot of sections that are intranet, so we use a workflow policy to set different policies in those areas. So if you log into a section of the website, that will be an intranet section. Also, all our faculty, staff, and students already have access to the CMS, and a lot of the units are using it to update their own content, their websites. Faculty are using it for reappointment — we have another system, a personnel and budget system, running on the site, which is how faculty get reappointed. And that's probably another talk, after I upgrade it to Dexterity or Volto. So in phase one, we created a Units content type, which is a container, and Goals. We created new roles: one for coordinator, one for department head, and one for VP. About the new roles: I have been using Plone for a while, so in the past I would just go to the ZMI, add the new role, and I'd have my role there in the ZMI. This time, since I was creating an add-on, I also created the roles there, because I want the option in the Sharing tab to be able to select those roles when granting permissions.
This is restricted to site administrators, so only a site administrator can assign those roles. And then we created the workflow. For the workflow, we created it through the web and then exported it and put it into the product. We also created a view that allows us to display the information the way they want — they had a specific vision of what the system should look like, how the documents should be presented, so we created a view for that. Also, this view will filter: the missions and goals, when the documents are published, are publicly available, but there are other portions of that template that are not displayed to the public. Okay, so: our initial deployment. We have phase one ready, we have all the content types created, and it's now time to deploy. We plan a maintenance window. The website will not go down — we can keep the site running, but it will go into read-only mode, because we have to run buildout on our master server. So we have to announce it; we announced to the community that the site would be in read-only mode for about an hour. And before that — I usually create the content types on my machine locally and test them in the latest version of Plone; currently I probably have Plone 6 on my box, so I'm sure that the product is up to date and working on all the versions. We deploy to our test/development machine, we upload to our GitLab, and then we test it on a clone of our production machine, our Castle development instance. Just to make sure — since I'm testing the code in the latest version on my machine, but the Plone site is running Castle, which is a different version — I want to make sure that nothing breaks when you move it to Castle. In some cases I noticed that, because of the different versions, there are some things I have to change to make sure everything works.
So we test that at least a week before the go-live date. The day before go-live, when everything is tested, what we do is: we first upgrade one of our mirror images of the master, and make sure that one comes back up after running buildout in non-newest mode — just to make sure that nothing else breaks. And if that goes according to plan, then I upgrade mirrors two and one. So the day before the scheduled maintenance, my mirrors are ready with the production code. All I have to do is bring down the master, run buildout, and restart the clients — and that's exactly what we do. After the initial buildout — the initial deployment of the add-on — we no longer need to run buildout again. We do this by just doing a git pull on the different servers. Again, I will usually start with a mirror: if I break a mirror, I have two other ones that can handle the traffic. So we do a git pull, get the latest code, test it on one of the mirrors, and make sure it doesn't break. Once that has run successfully, we run it on my other mirrors, and then I do a git pull on my master and sequentially restart the clients. So this website never goes down — it doesn't even go into read-only mode this way. And we're going to need this, because in every phase of this project we're going to be updating the code and deploying it. Sometimes I will need to reinstall the product in Plone. I was doing this with portal_quickinstaller, which is going away — or is gone by now in Plone 6. Otherwise, I have to go in, uninstall it and then reinstall it, which is pretty much the same thing that the portal_quickinstaller reinstall does. Okay, so I guess it's demo time. Okay, so basically — and I'm running this on my local machine here — the units are very simple. We have a vocabulary for the different units, so I'm just going to pick Information Technology. And then we have the division.
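The rollout procedure described above could be sketched roughly like this. Host names, paths, and service names are all illustrative — the point is the ordering: one mirror first, then the rest, then the master with clients restarted one at a time.

```sh
# Zero-downtime rollout sketch (illustrative names only).
# 1) Update one mirror first -- if it breaks, two mirrors still serve.
ssh mirror1 'cd /opt/plone && git pull && bin/supervisorctl restart instance:*'

# 2) Once mirror1 is confirmed healthy, repeat on the remaining mirrors.
ssh mirror2 'cd /opt/plone && git pull && bin/supervisorctl restart instance:*'
ssh mirror3 'cd /opt/plone && git pull && bin/supervisorctl restart instance:*'

# 3) Finally the master: pull, then restart the ZEO clients sequentially
#    so at least one client is always serving requests.
ssh master 'cd /opt/plone && git pull'
for client in client1 client2 client3; do
  ssh master "cd /opt/plone && bin/supervisorctl restart $client"
done
```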
This is part of Administrative Affairs. And then I will put in the mission — I'm just going to throw in some mission here. So that creates my unit. Like I said, we had 108 units at the beginning, which we have since cut down, because there were too many administrative units, so they were condensed. So we have the mission and goals for this unit, and now we can add the goals. When we had this ready and deployed to the server — since we already had the missions and goals on the website — we started copying that information over. We had a part-time worker just go through all the different units, create all of them, and copy and paste all the missions and goals. I'm just going to add a new goal — again, I'm just going to make something up here. First goal, and then I'm going to add another goal. And you can add as many goals as you want. Now, you notice I didn't use the add button on the left menu. You can still add it through there, but the reason why we did it this way is to guide the users through what they need to do. In addition, we didn't create views for goals — we wanted to see the unit, not the individual goals — but they could, if needed. It also allows us to present the information in a more comprehensive way for them. After the missions and goals were populated, the units were published. And on their websites, instead of having static missions and goals, we switched to an AJAX call to pull that information from the JAM system. Now, this information up here is publicly available for most units. Right now this one is private, so it's not going to be publicly available, but when it's published, it is. The table below — which doesn't have any information yet — is only visible to those with permission to add content or review the content.
Phase two was 15 days after the initial deployment. That gave us enough time for the units to be populated, because we needed that first before anything else. In phase two, we created the unit assessment plan, which is a container, and that container holds the content: planned activities and programs, the plan rationale, and collection and evaluation. We created a new workflow for this content type, which is also followed by the contained objects, and it follows what I described before: the coordinator creates the plan, then submits it to the director; the director reviews it and submits it to the VP; the VP approves it and it goes to the committee; and the committee publishes it, either internally or externally. The workflow allows us to publish internally or externally. At the time, this information needed to be externally published because of our accreditation; now most of the content is internally published, and they can switch it back if needed. And a new view was created for this unit assessment plan. So, back to the demo. I'm just going to create a new plan. We have a little instruction here, a little Post-it note you could say, to let the users know what to do. Then they can add a new plan. For the plan, all they have to do is pick the year and who the unit director is. That's all they choose, but that's not all the content contained by this content type. After you save it, you're going to see that you have a new year here, and this is organized by year: we've got the year, the plan, the mid-year check, and the report. It will show when the document is a draft; after it is submitted, it will show the date it was submitted. Okay, so this is the new plan.
So as you can see, all we selected was the year and the unit director, and this pre-populated a lot of the information. We already have the year, and we created a title based on the information in the fields. We got the year, which was selected by the user, and the unit, which we pick up from the parent: when we create a plan, our vocabulary looks at the parent's unit name and brings that information into the plan. We do this so that every plan and every report has the information about the unit and the division in it. So we have the year, the division, the unit, the director, who created the plan, other contributors, and the date. The date currently says "planned" because it's still private; once submitted, it will show the date there. We also have the mission, which is currently taken from the unit mission, so they don't have to type it in. The idea is that the information is consistent: the less you have people typing information in, the more consistent it's going to be. Then we go into the assessment plan. We have little yellow notes on the things they need to do, and sometimes a blue bar where they can edit a section. So if I were to add a new outcome, this is a child object within this plan container. And as you can see, we have a vocabulary here that looks at the unit (I have two units with the same information here) and takes the unit's goals, so that you can choose from them. So the goals are consistent: if you declared a goal on the unit, you can select it on the plan. This is my unit outcome. The expected outcome also has to align with the strategic initiatives for the college; this is another vocabulary that we created. And we have the institutional learning outcomes as well, so they have to select an institutional learning outcome; I'll pick one at random. And what is the activity? So you have to document this.
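The goal-selection behavior described here, a vocabulary whose terms are the containing unit's own goals, would in Plone be a context-aware dynamic vocabulary (an `IContextSourceBinder`). The following is only a stdlib simulation of that logic, with plain dictionaries standing in for the content tree; the field names are illustrative.

```python
# Simplified stand-in for the dynamic goals vocabulary described above.
# Plain dicts simulate the Plone content tree: an outcome lives in a plan,
# which lives in a unit; the vocabulary walks up to the unit and reads its goals.

def unit_goals_vocabulary(context):
    """Collect the goals of the unit that (transitively) contains this object."""
    node = context
    while node is not None and node.get("type") != "unit":
        node = node.get("parent")
    return [] if node is None else list(node.get("goals", []))

unit = {"type": "unit", "goals": ["First goal", "Second goal"], "parent": None}
plan = {"type": "plan", "parent": unit}
outcome = {"type": "outcome", "parent": plan}
print(unit_goals_vocabulary(outcome))  # the goals come from the containing unit
```

Deriving the choices from the unit, rather than letting users retype them, is what keeps the declared goals and the plan's selected goals consistent.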
What are the steps that are going to be done to get to this outcome? And how are we going to measure success? Let's say number of downloads for now. How are we going to collect this information, is it direct or indirect? And what's our target? Increase by, say, ten percent. Okay. So when you save it, you are still looking at the plan content type, and you will see your unit's outcome here and the information you put in. The rationale for the plan, again, is another content type that will be added to this object; when you click on it you have a text box here. I'll skip ahead, otherwise we're going to run out of time. And you can add as many as you need; that's one of the reasons why this is a container. This also supports multiple paragraphs. And again, the data collection. Once they create something, the little label that says you need something here disappears. So now this will go through the review process. Again, they just transition it: they submit it. In this case, because of my role, I can submit to the department head or chair, or submit to the VP or dean. I'm using the slash here because I'm using the same workflow for academic and non-academic units: on the academic side it's not a VP but a dean, and not a department head but a chair, though they have pretty much the same roles. So after this is completed, it will be submitted, and the workflow state will change. If we go back to the unit, you can see that the information here changed to the time when it was submitted. Then, let's go back to the presentation, since we're done with this part. Now we move into phase three. This was done less than a month after the initial deployment of the add-on, and this added the academic side of it. The academic and non-academic sides are very similar.
Similar enough that you can copy and paste most of the code, but different enough that you need two different content types: they have different fields, different requirements, and different labeling for the information. In this case, the academic units have a different plan activity because they have different requirements, but the plan rationale and the collection and evaluation are the same. It has a different view as well. I'm not going to show this, because if I do I won't have enough time, but it's very similar to what I showed you before. Then we had the mid-year check. This is very simple: just one content type with some choices and some text areas. So we created the content type and a view for it. I'll show you real quick. When you do a mid-year check, as you can see here, you don't add it from the menu, though you could; that's not the way we tell them to do it. They add it by clicking here. Then again, they basically pick the year and the unit director; they can choose from the menu, and they can search among about 10,000 users. Basically this is just a set of checkboxes for whether you're on track to meet your goals or outcomes or not. So I'm just going to say yes, yes, yes, yes. You can comment on any of these. The word "optional" is there because it was requested by the users: even though it wasn't a required field, they thought it was required, so we were asked to add the word "optional" to the label. Again, we're using all the Plone mechanisms to create forms, so Dexterity, and to validate the information. So that's our mid-year check, which is very short and very simple and straightforward. Then we go into phase five. For this one we had a bit more time, which was good, because it required more fields and content types, and we also had to modify the workflow to accommodate it.
We also have a report view, which is a view for the parent content type, and we have a "generate report" action. The report is not created manually; you generate a report for the year. When you do that, it grabs the plan outcomes from this year's plan and brings them into the report, in addition to grabbing the unit name, the summary, the mission, and all that from the parent. But it also looks at the year before: when you generate a report for the 2021 assessment year, you are taking information from the 2021 plan and information from the 2020 report. I will show you that in a second; I'm just going to generate a report for this year first. In 2019-2020, the report was a table, because that's the way they were doing it and that's the way they wanted to present the information. When it is created, you can see that we brought in my goals from this year; since we don't have any data on changes implemented, that is blank. But if we create a plan for next year, it will bring in the changes implemented from the assessment report of the year before. Again, this gives you a little edit action where you can modify the information. Some things are frozen, so they're only for displaying the information; they store what the user cannot modify at this time, because on your plan you said this is what you were going to do. But there are a few things that you can do. So it will bring in all the information. Now you will see whether the target was met or not, what the findings and progress are, and the use of results, among other things here. I'm going to cancel this because I want to show you the one with the YAMS demo data. I created this template before; I'm just going to run the report for 2021-2022. You can see that you have the blue line for things that are good, where you don't need to add anything else, but it also shows you the orange line on things that need completion: it will tell you to complete that section.
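The "generate report" logic described here, pulling outcomes from year Y's plan and the implemented changes from year Y-1's report, can be sketched as a small merge function. This is an illustration only; the field names and data shapes are hypothetical stand-ins for the real Plone content objects.

```python
# Sketch of the "generate report" behavior described above: a report for
# year Y combines that year's plan with the previous year's report.
# Field names are illustrative, not the add-on's actual schema.

def generate_report(year, plans, reports):
    """Build a report dict for `year` from the plans and reports by year."""
    plan = plans.get(year, {})          # this year's plan supplies the outcomes
    previous = reports.get(year - 1, {})  # last year's report supplies the changes
    return {
        "year": year,
        "outcomes": plan.get("outcomes", []),
        "changes_implemented": previous.get("changes_implemented", []),
    }

plans = {2021: {"outcomes": ["Outcome 1", "Outcome 2"]}}
reports = {2020: {"changes_implemented": ["Bought licenses"]}}
print(generate_report(2021, plans, reports))
```

Missing prior-year data simply yields empty lists, which matches the demo where "changes implemented" is blank in the first reporting year.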
So this section needs to be completed. It will also bring in the changes implemented: these are things that we reported on the 2020-2021 assessment report. If you look here, we've got outcome one, test test test, and outcome two, test test test as well. So when the report for 2021-2022 is created, you can see that those are already put in, and now we have a little "complete this section" Post-it note here, so they know they have more work to do. When they do that, it will bring in all the information. As you can see here, this unit goal no longer exists, but it doesn't change, since it stores the data: they cannot change the unit goal now, because that was the goal that was reported last year. They can add some information here: if the change was implemented, they can say so, what the current status is, and whether there was a budgetary consideration. They're not using this for budgeting yet, but the plan is that decisions about the budget will be highly influenced by these reports. Let's say this is "spending approval". So now that section is completed, and you can see how it no longer has the little Post-it note in it, so the user knows it is completed. This goal still needs to be completed, so I complete this section: the target was met; for the findings and progress I'm just going to type "progress"; for the use of results I'm just going to type "test", since I'm running out of time for typing things here. Budgetary consideration: yes, the request made it to the department head. Unit communication: yes, I communicated with everybody; I'm going to send an email and say that. And now my units are complete. So now I have a full report that I will submit to my director. They will review it and pass it on to the VP for approval, and so on. Also, they can upload attachments here.
So if I have an attachment, a rubric, a survey, or something else, that can be uploaded. My Chrome is acting up this week, so I cannot even select files anymore; let's see if drag and drop works. Yeah, that works locally. So now you have that attachment there. And again, because the report is a container, you can add as many attachments as you want. There are also "other accomplishments": if you did something incredible that's relevant and you want to highlight it on your report, you can add that information, again, as many as you want. Okay. In phase six, we went into the institutional reports. Basically, we created a whole bunch of endpoints to connect the data to Power BI. In the past we created reports in Plone that would show you nice charts, but because we were developing this with Institutional Research and Assessment, and they are already using Power BI, it was easier for them to see the information in Power BI, where they can slice and dice the data. So reports were created for this: we built a couple of Power BI views, and we exposed the data from Plone using the CSV format, so the information is pulled from Plone into Power BI. Now Institutional Assessment can see there are 108 units; across the years they can select a specific year if they want. This is 2021-22, the first year: we had 102 plans submitted. You can see the distribution of plans and reports. You can say you just want the administrative ones, and that will narrow it down. This is something I will probably also make a view for in Plone, so they can get even more up-to-date information: the Power BI side has to be refreshed, whereas if I create it in Plone it will be real-time data that we can pull in directly and present. From here they can see the state of all the submissions, and they can narrow it down to a specific one.
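The CSV feed into Power BI described here can be illustrated with the standard library alone. In Plone this would typically be a browser view returning `text/csv`; the function below is just a plain sketch of that serialization, with hypothetical column names.

```python
# Sketch of a CSV export like the Power BI endpoints described above.
# In Plone this would be a browser view; here it is a plain function
# using only the stdlib. Column names are illustrative.
import csv
import io

def plans_as_csv(rows):
    """Serialize plan records to CSV text so an external BI tool can pull them."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["unit", "year", "state"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

rows = [
    {"unit": "Information Technology", "year": 2021, "state": "submitted"},
    {"unit": "Library", "year": 2021, "state": "published"},
]
print(plans_as_csv(rows))
```

CSV is a sensible interchange choice here because Power BI can poll a URL-shaped data source on a refresh schedule without any custom connector.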
They can also look at a specific strategic goal: say they want to see how many units are using that goal, that can be done as well. So this one is an overview, and this one is for the different units. Then, content rules and Mosaic. We use content rules; as I said before, the standard rules did not meet the needs I had for this product, so we added an add-on that allows us to send emails to specific roles. That's something I didn't show, so let me show it real quick. Under Sharing, the assessment coordinator, the director, the department head or chair, and the VP or dean are some of the roles that we added with this add-on, and they're available through the web. In order to be able to notify people with those roles, we had to add that add-on, and that took care of it: we send notifications to the coordinator, to the committee, to the VP, and so on. If anything is sent back at any point, which can happen, notifications are sent back to the coordinator or creator. We are using Mosaic for the layout of YAMS. YAMS stands for York College Assessment Management System; I didn't come up with the name, and I didn't agree with it, but they are using it and the name stuck, and that's why we have the yam at the beginning of the presentation. When they go into YAMS, either by logging into the website, where the dashboard has a link to YAMS, or by just going to the site URL slash yams, they get this page, which has some information about training. We have the academic program assessment, and we have the non-academic program assessment for the administrative, educational, and student support units. On that view you also have the pending review items: we've got columns for pending unit coordinator, pending unit head, pending VP, and pending committee.
And this is all done with Mosaic, because if I had to develop it, it would take me more time than just doing it through the web. And it's the same idea: reduce, reuse, recycle. So, in total, we created 25 Dexterity content types, some folderish, some not. We created 22 views, plus a script to generate the content. We created 11 vocabularies that are used throughout the site, a lot of them for querying content: the goals vocabulary makes a query for that particular unit's goals, and there is one for the years, which I will talk about in a second. We created the workflows we needed for all this information, the three new content rules that I mentioned before, and the Power BI reports. Now, some miscellaneous items. I recently updated the year vocabulary to exclude years for which plans were already created, because users were selecting the same year twice, which created two plans for the same year; but since we only show one, that started causing confusion. So we modified it with a query: if a plan for that year already exists for that unit, don't show that year in the plan's year vocabulary. Next, the tabled outcomes on the 2019 report. Like I mentioned before, on the 2019 report the information was presented as a table, because they were used to a spreadsheet holding that information. But that doesn't work on the web, because the web is a lot narrower than the table, and it doesn't work for printing, because you cannot print it without a really wide piece of paper. So the 2019 report still uses the table, though we can change that very easily, and there are other fields that change as well. For the years after 2019, the view structures all the content vertically like this, because it's easier for printing and it fits on the screen.
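The year-vocabulary fix described above, hiding years that already have a plan for that unit, boils down to a simple filter over a query result. This is a stand-in for the catalog query the real Plone vocabulary would run; the data structures are illustrative.

```python
# Sketch of the modified year vocabulary: exclude years for which this
# unit already has a plan, so users cannot accidentally create a duplicate.
# Plain lists stand in for the catalog query the real vocabulary would run.

def available_years(all_years, existing_plan_years):
    """Return the years that may still be chosen for a new plan."""
    taken = set(existing_plan_years)  # years with a plan already created
    return [y for y in all_years if y not in taken]

years = [2019, 2020, 2021, 2022]
print(available_years(years, existing_plan_years=[2020, 2021]))  # [2019, 2022]
```

Filtering the choices at the source, rather than validating after submission, removes the confusing duplicate-year case entirely.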
You can still get a table here for the changes implemented, because those have fewer columns, so it fits on the screen and when you print it. Okay. Did I cut you off? No, no, I'm good, I was done anyway; that was my last slide. Okay. Well, thank you for showing us how your organization uses Plone. I think it's always interesting to see how other developers solve problems. Thank you for watching. If you can, join us in Jitsi; if you have any questions for Rafael, go ahead and join him there, and I'm posting the link. Thank you, Rafael, for speaking today. Thank you.
Rafael will describe the creation of the York College Assessment Management System (YAMS) add-on for Plone: using agile development and Plone's built-in functionality, such as workflows, versioning, and roles, to create a system that is standardized but flexible, provides continuity as leadership changes, and serves as a repository for accreditation.
10.5446/56665 (DOI)
Hey, welcome back to day two of Plone Conference 2021. I'm really excited to introduce Timo Stollenwerk, who will be giving our keynote today. He is a huge contributor to the community; he's basically the shoulders that we all ride on. kitconcept is the driving force behind Volto. Timo himself is a member of so many teams it's almost hard to believe: the framework team, the release team, the REST API team, and of course the Volto team. So, definitely very experienced, and we're very excited to have him give us a talk about Plone 6 and the state-of-the-art JS front end, Volto. Timo, whenever you're ready. Hey. Thanks, folks. Power is nothing without control. When I was a kid in the 80s, I started to play tennis, and the way I was taught tennis was to try to hit the ball as hard as possible, as close as possible over the net. This was the way tennis was played. In today's modern tennis, everybody plays topspin, all the players, even the youth players. When you hit a topspin, you basically slice over the ball; you touch the ball longer, which gives you more control over it. But when you learn topspin, your trainer will actually tell you to slow down at first, to have more control and less power. So you sacrifice power to gain control. In tennis, and in most sports, it's all about the balance between power and control. And in technology, the ultimate power tool looks something like this. In a remote control like this, you have all the controls at your fingertips. It's a perfect tool if you're a technician, and I'm pretty sure that this particular remote control was built by technicians. The problem is that it was built as if it were a technician-to-technology interface, but it's not: it's actually a connection between a human user and technology. At this point I usually show a CMS user interface that was built by developers, for developers. There are plenty of examples.
Usually I showed TYPO3, and you'd see a busy 90s user interface. But to be honest, in 2021 that's an easy win, because when you look at those traditional CMSs, they have something in common: they all have an outdated technology stack and a pretty old-school user interface. They're kind of stuck in the past. I did a little research before this talk and looked at TYPO3, Joomla, and Drupal, and they all ship with pretty old versions and a pretty old technology stack; some even have jQuery UI in there, which is totally outdated. And when you compare that, Plone 5 does not look that bad after all, to be honest, because we still have a far better user interface, but we also have an outdated stack. So that's an easy win. But there's also an elephant in the room, which is WordPress, because WordPress is the market leader in CMS: they have a market share of around 40%, or sometimes even more. And WordPress is actually a pretty modern CMS. They made the transition that we're about to make, to a modern JavaScript stack, at the end of 2018 and in 2019: with WordPress 5 they released Gutenberg, a new admin interface, if you will. So their back end is pretty modern; the front end is not that modern. Anyway, from the beginning, when Gutenberg was published, we looked at it, both from a technical point of view and a UX point of view, and I have to admit it's pretty impressive what they did, both technically and in terms of UX. So it's not a bad product at all. Though there's one recurring theme when I try out WordPress, or other CMSs, every now and then: I constantly feel, when I work with WordPress or similar systems, that I'm not smart enough for them. And I started my first website in 10th grade, that was 1995, so I have 25 years of experience, most of it professional, and I have a university degree in computer science, comparable to a master's degree. So I think I know a few things about the web.
But I still feel like an idiot when I operate those systems, and let me show you what I mean by that. I usually set up a challenge for myself when I try out a new system, and I usually also screencast it to see where I had problems. The challenge is usually a regular page with some text and some images, pretty standard; all CMSs should do that, even old-school ones, so that's not really the challenge. The challenge is to see how easy it is for a user. You can see here the page that I create: an image aligned left and an image aligned right. I uploaded the images I always use, and you see they differ, so I want to change the upper, left-aligned image to the same size as the right one. WordPress has this nice feature where you can basically drag to resize the image. But I'm not a designer; I don't have an eye for whether they're really the same. So I check the settings there, and they have a few presets: small image, large image, thumbnail. I try around a bit with those, but none of them really fit. So there's a second setting, maybe that, maybe 100%; maybe I can toy around a bit and find the right value. So I choose a bigger setting and then 75%, but that still doesn't do it. Okay, there's a fourth setting actually: a width and a height. So I check the other image, 300 pixels wide, and I set this one to 300 pixels. Oh, the aspect ratio is broken. So I have to check both the width and the height of the image to actually make them look the same. If you're a web developer like myself, you will get through it; I mean, it's like CSS. To further show you what I mean, let's go a bit simpler; I haven't even shown you a complex use case yet. So, the simplest form: just text. This is how text looks in WordPress. It's basically a word processor; you can do everything. You have a few presets, like in Word. You can set the pixel size. You can even set the line height here.
So you can basically do anything from seven pixels to 300 pixels, and you can even choose the color. So you can have something tiny that's maybe yellow and something super huge in red if you want to; you have the option to do so. The thing is, what WordPress is mimicking here is a word processor and an image manipulation program. What was missing in the previous image example is that in an image manipulation program you can actually lock the aspect ratio to a fixed setting, and then it adapts automatically; so they didn't even go the full way, they're still missing something before it becomes a full image manipulation program. That's pretty impressive from a technical point of view, of course, but there's a problem here. The regular user of a CMS, myself included, is not a designer. A designer knows about typography, about rhythm, about line heights, and those kinds of things. I know enough about design and typography to know that I can't do it myself: I know the theory, but when I actually try to apply it, it doesn't look good. I'm just not an expert. The same is true for UX. Readability is really important on websites, and I know the theory, but I know enough to pass that on to people who actually studied it. Take the example I showed with the color chooser: modern websites need to be accessible. There's legislation that requires government and public websites to be accessible. If you allow people to use a free color chooser, they can choose basically any color combination, and that might break accessibility without them noticing. So if you provide something like that, you need at least an accessibility check that runs against the website you're creating; but that's pretty complex, actually, and pretty labor-intensive. You also have to show that it works responsively, with responsive web design, and that's quite hard these days.
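The accessibility check hinted at here is concrete: WCAG 2.x defines a color contrast ratio, and level AA requires at least 4.5:1 for normal text. The snippet below is a standalone illustration of that published formula, not part of any CMS mentioned in the talk.

```python
# WCAG 2.x contrast check: a free-form color chooser can silently break
# accessibility; this computes the contrast ratio the guidelines define.
# AA requires >= 4.5:1 for normal-size text.

def _linear(channel):
    """sRGB channel (0-255) to linear-light value, per the WCAG definition."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; ranges from 1:1 up to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(contrast_ratio((0, 0, 0), (255, 255, 255)))             # 21:1, maximal contrast
print(contrast_ratio((255, 255, 0), (255, 255, 255)) >= 4.5)  # yellow on white fails AA
```

This is exactly the kind of check a CMS would have to run on every user-chosen color pair, which is why exposing a raw color picker to editors is risky.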
Even web developers fail to check all the combinations. Those systems have means to adjust things, and you can have a look at them, but to be honest, you really need to be a web designer to do it right. So in the end, no matter how smart you are, you will tinker around; I will tinker around, because I'm not a designer, I'm not a UX expert, and I don't know everything about accessibility. And you will end up with a bad product. The reason is that the modern web is quite hard and quite challenging. Back in the old days, when you only had text and a few images, it was easier. But the requirements in a modern web project are: you need a consistent design, you usually have a corporate design manual, so you need a designer who makes the site look good. And even designers are specialized these days: you have experts for UI and experts for user experience. At kitconcept we actually have both, because they are different specialists for different things. You want a system that's easy to use, for low training costs; that's really important for large organizations that need to train people on their systems. It needs to be accessible, as I already mentioned, for legal requirements. It needs to be responsive, it needs to be secure, and it needs to be fast. That all leads to a situation where a modern web project of a decent size is actually a team sport. It's not something a single person can do, like back in the days when CMSs were introduced. So Plone 6 is not a classic CMS, because it's not stuck in the past any longer with an outdated tech stack; we rebuilt it from scratch with a super modern, state-of-the-art tech stack. But it's also not a website builder like WordPress: not a tool that introduces a new bottleneck where you have to go to an agency to change stuff on your website, nor one where you will end up with an ugly or inaccessible website when regular users are using it.
Plone 6 will be a modern, state-of-the-art CMS solution with a modernized technology stack and loosely coupled front-end and back-end systems. And it's, as far as I know, the first enterprise CMS that takes that step and provides both power and ease of use. So how did we get there? I already mentioned user-centric design. Albert Casado, the designer who was responsible for the Plone 5 Barceloneta theme, came up with a brand new UI framework when we started to develop the new stack. This new Pastanaga UI, or Quanta UI now, focuses on simplicity and focus, which are basic principles of good UX. You introduce focus by removing the settings that are not the most important ones, so you put the focus on the most used actions for editors. The idea of Pastanaga UI is that we focus on everyday editors, because they're our main users. What Albert did with the sidebar, for instance, is put the three most frequently used actions in Plone there: edit content, add content, and look at the structure. Everything else is hidden. This doesn't mean that Plone 6 doesn't have power-user features; power users are also important, but we hide those features, because we want to focus on simplicity. So let me show you how we solved the image-handling problem in Plone 6 and what design choices we made. In Plone 6, and I will show you the details later, we have two settings for images. One is the alignment: left, right, middle, and 100% width. The other is the size: small, medium, and large. That's 10 to 12 combinations, depending on how you count, and that is already quite a bit: when we give a website to a designer and ask them to do all the variations, 12 variations is a lot for just images. As an integrator company, you can always add more if you have the feeling that you need them, but they should be part of the design process, right?
And you should think carefully about what you actually need. Plone 6 is flexible enough to allow integrators to add more options if they need them; we don't restrict that. But out of the box, we focus on simplicity and keep the system simple and user-friendly, so that our users do not run into what I showed you with WordPress, where they're just lost. Plone 6 will support modern page layouts, and it's all about blocks, blocks, blocks. I said that at last year's conference already and talked about it in depth. The core idea is that modern web layouts and landing pages are usually one-column layouts, driven by the mobile web and the need for responsive web design. So, no sidebar and no portlets; but we will have a modern solution for portlets, so they won't fully go away. Let me show you a bit of what I mean. I took three random designs from the internet to show you roughly what kind of website I have in mind. This is one of them: the WordPress 2020 default theme. WordPress actually ships a new theme every year, which is also pretty impressive. Let's go through it: at the beginning you have a highlight block, then a few of what we call grid blocks, another highlight block, a quote block, a call-to-action block, and another grid block. At kitconcept, we have an intern who started last month, basically from zero; he had no previous experience with web design, actually. We gave him the assignment to recreate the 2020 theme in Plone, because the 2020 theme is GPL, it's open source, so you can grab it. The challenge was to start from scratch with the CMS and also create the blocks that might not be there yet, or reuse the blocks that we already had. And I would like to show you the results.
So I think he did a fantastic job there. But the thing is, that's not a perfect theme, right? But this is something that an intern did in a few weeks, right, from zero. And you see here that we have actually the same blocks. And on the left side, you see that it's actually a Plone site. So you can go to Edit here, and you can actually edit that, right? So this was just a showcase to show that we can easily do the modern designs that WordPress does, right? There is no functionality missing, right? Of course, WordPress has like 60,000 add-ons, I think, or something. And we're not there yet, of course. But still, we can do it, right? It's doable. And it's not hard, right? An intern did that. So you don't need to be like a website pro with 20 years of experience. And let me show you in more detail how this works. So in Plone 6, you can build sophisticated landing pages and layouts. And that's based on grid layouts, right? So we have this grid block. And actually, this is the block that we put a tremendous amount of work into. Our Victor put a tremendous amount of work in. I think we had like three or four implementations. And we started from scratch over and over again, because we weren't happy with what we had. Tiberio also helped us with that, with the infrastructure. And let me show you how this looks. So this is the grid system. You can have between one and four elements in a grid. But that's also flexible. You can change that. You can have six or 100 if you want. Those are text grids. Those are image grids, right? This is roughly how it looks, right? And this is not a beautifully designed overview page, but just a showcase of what you can do, right? So let me show how the editing works. So when you create one of those grid pages from scratch, right? So you choose a grid block here. Then you can choose the number of columns that you might want. And then you can freely choose the sub-blocks that you have here, right?
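For readers unfamiliar with how Volto pages are stored: a page keeps its blocks in a mapping plus a separate ordering list. The sketch below is a simplified, hypothetical rendition of that shape (the real grid block's field names may differ), just to make the "grid block with sub-blocks" idea concrete:

```python
# Simplified sketch of a Volto-style page: a "blocks" mapping keyed by UID
# plus a "blocks_layout" giving the display order. The "columns" key on the
# grid block is an assumption for illustration, not the real schema.
page = {
    "blocks": {
        "b1": {"@type": "title"},
        "b2": {
            "@type": "grid",
            "columns": [
                {"@type": "image", "url": "/images/cat.jpg"},
                {"@type": "text", "text": "A two-column grid."},
            ],
        },
    },
    "blocks_layout": {"items": ["b1", "b2"]},
}

def outline(page):
    """Return the block types in display order, expanding grid columns."""
    lines = []
    for uid in page["blocks_layout"]["items"]:
        block = page["blocks"][uid]
        lines.append(block["@type"])
        for col in block.get("columns", []):
            lines.append("  " + col["@type"])
    return lines

print(outline(page))  # -> ['title', 'grid', '  image', '  text']
```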
Right now we have image, listing, teaser, and text, right? We'll go with image first. So we choose an image. And when you have an image block in Plone 6, you can choose if you want to upload a new image, or if you want to choose an existing one. If you choose an existing one, on the right side a nice content chooser shows up. You can then like drag and drop the blocks. You can add a new block. So the initial choice that you make is not fixed, right? So you can delete blocks. You can add blocks if you want. And you can also mix blocks here if you want, right? So I create a two column layout here. And here I add text, right? What you saw earlier in the scroll, all the combinations that you have here, right? And you can basically allow all the sub-blocks that you want here, right? But we choose to only have the ones that make sense, because you usually don't want an image gallery or a highlight slider that's full width in your sub-block, right? So not all blocks make sense here. One other really important block, in my opinion, that Plone 6 will ship with, is the teaser block, right? A teaser is usually used to create landing pages. That comes from news sites, usually. You have an existing article, for instance. And on the overview page, you just tease the articles, right? And the content is already in place. So let me show you how that looks. So you create a new teaser grid page, same as before. You create a grid. You choose the columns. And then you choose not an image, but a teaser. When you have a teaser, you're of course choosing an existing content object, right? But now, when you choose an existing content object, it will actually fetch not only the image, but also the title and the description, right? You can do the same thing that I showed before. You can move them around. But you can also override everything that you see here, right? So you get the title out of the box, like when you choose that. But you can override the title.
You can override the description. You can even override the image, right? The reasoning behind that is that, depending on the context, you might want different ways of teasing something, right? You can't assume that you have an element and that it's teased in the same way on all subpages, right? Because maybe one department uses it in one way, and other departments use it in another way. So you need a way to override that, right? So this is what the teaser block is. And in combination with the grid block, it really allows you to create very nice landing pages. Plone 6 will ship with a few default blocks. I won't show them in detail. It's basically the basics that we have in Plone 5 already. So we have a text block, an image block. We have a video block that supports YouTube and Vimeo and MP4. We have a listing block, which is basically collections in a block that allows you to create listings. We have a table block, table of contents, a hero block, and a maps block for Google Maps. But I would like to focus on the more fancy and advanced blocks here. We have almost 100 add-ons for Plone 6 on NPM, right? So let me show you a small number of the blocks that we have. So that was too quick. So this is a slider block that we have. This is a carousel block. And they're actually quite similar in structure, right? They only differ in the way that they display the information, because they're all based on the idea of teasers. So let me show you how this block works. So we have an existing block, right? You go to Edit, and then you see on the right side like a number of teaser elements, basically, right? You can add a new teaser element. It works exactly the same as what I showed you before. You choose an existing teaser. You fetch that, and the block will automatically fetch the image, the title, and the description, right? You can drag and drop that. You can move that around. You can overwrite the image, the title, and the text.
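The override behaviour described here (fetched values by default, with per-teaser overrides winning when set) can be sketched as a small merge function. `resolve_teaser` is a hypothetical helper written for illustration, not the actual Volto implementation:

```python
def resolve_teaser(target, overrides):
    """Merge fields fetched from the target content object with per-teaser
    overrides; a non-empty override wins, otherwise the fetched value is used.
    Hypothetical helper illustrating the behaviour described in the talk."""
    resolved = {}
    for field in ("title", "description", "image"):
        resolved[field] = overrides.get(field) or target.get(field)
    return resolved

article = {
    "title": "New campus opens",
    "description": "Full story...",
    "image": "/campus.jpg",
}
print(resolve_teaser(article, {}))                        # fetched values as-is
print(resolve_teaser(article, {"title": "Campus news"}))  # only title overridden
```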
And you can, as well, change some settings below in the right column, right? The number of elements that you have there. So the blocks all work the same, right? So users get used to the way they operate. So they only have to learn certain patterns once, and we're reusing those patterns, right? And those patterns are already in Volto core. So let me show you another block that's from the EEA, from the European Environment Agency, and from Eau de Web, who are working for them. And they open sourced that. They actually open source everything they do. And Nilesh, Alin, and Tiberio worked on that. And it already has contributions from Julia, Victor, David, and Alex. So lots of other companies are already using that. And that's really great to see, that individuals and companies open source blocks, and then you see interaction on that right away, and you get, like, QA and improvements right away. So that's an accordion. I think that's nothing new, really, so you can open and close the panes. But I would like to show you how the editing works, right? So one cool thing is that the editing is fully operational in edit mode, right? So you have the same thing that you had in the carousel and the slider block, so you can add new elements. But the new thing here is, so I add a new pane here with text. But the new thing here is that you can add an arbitrary block here, right? So you can have those accordion elements, and you can add text to one. You can add a teaser grid block to the other, and a listing to the third, right? So you have all the flexibility that you can imagine with the block system. And no matter which blocks you have, you can add them here, right? Same with the grid block, right? This gives us the flexibility in the end for the integrators, so that they can freely put together stuff depending on the needs of the client. Another contribution by the EEA is the new search block that will ship with Plone 6.
I guess you all know the faceted navigation add-on for Plone. It's one of the most popular add-ons that are there. And with Plone 6, we will ship that by default as a block. That was contributed by Tiberio, Christina, and Victor. And let me show you how that works. So I think I mixed things here. OK, that's the wrong form. OK, give me a sec. Crap. OK, I will show that later when I have time. OK, the search block is really amazing, but I uploaded the wrong video. My keynote crashed. Sorry for that. OK, so I will do that at the end. And I will do that live. So the next popular add-on is forms. You all know PloneFormGen and EasyForm. It's one of the most used add-ons in the Plone world. Nicola and Julia from RedTurtle worked on an add-on product that does exactly that. Colleagues from kitconcept and Janina from Werkbank also worked on different implementations. And that happens every now and then, that we try out different things, how they should work. And then at the end, things come together. But Plone 6 will ship with a forms framework, or with an add-on. So let me show you how this works. So you create a new contact form, for instance. So let's keep it simple here, because I want to show you many things. So you create a new form here. The form is a block. Everything in Plone 6 is actually a block. So you have a few settings there. This is basically to whom the email is sent, what subject it uses, and all that kind of information. Here you see what kind of fields you have. So basically the fields that you have in PloneFormGen as well, so text, email, whatnot. And we're creating a simple form here. So we are creating a subject field, which is a text field, and a text field for the message, actually. So you can add an arbitrary amount of fields. And the cool thing here is that because that thing is a block, you can freely combine it with other elements.
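The form block described above boils down to a list of field definitions plus some delivery settings. Here is a hedged sketch of that idea; the field names and structure are invented for illustration and do not mirror the real add-on's schema:

```python
# Hypothetical shape of a block-based form definition: delivery settings
# plus an arbitrary list of fields, as described in the talk.
form_block = {
    "@type": "form",
    "send_to": "info@example.org",
    "fields": [
        {"id": "subject", "type": "text", "required": True},
        {"id": "message", "type": "textarea", "required": True},
        {"id": "email", "type": "email", "required": False},
    ],
}

def validate(form_block, data):
    """Return the ids of required fields missing from a submission."""
    return [
        field["id"]
        for field in form_block["fields"]
        if field["required"] and not data.get(field["id"])
    ]

print(validate(form_block, {"subject": "Hi"}))  # -> ['message']
```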
One recurrent problem in the classic Plone setup was always that clients were asking, hey, we want something on the contact form. We want a more elaborate contact form. And then you can either have text fields in your form content type, and that's not as flexible as you want. Or you have to build something that the client can't really edit. And now with the new block-based form, you can just combine that wherever you want. You can even have multiple forms on one page. And that's the kind of flexibility that we never had before. So that really shows the power of blocks, in my opinion. Another thing from the EEA, and something that Alin built, is what I call no-code content types. And that fulfills actually a promise that we made, or tried to fulfill, a long time ago. Asko worked quite a bit on finishing the story that you can create add-on products and content types through the web, and also templates. But the missing part was always the templates. So we had lots of flexibility. It was always a cool feature of Plone that you can create content types through the web. But the missing piece was always templates. But not any longer. So let me show you how this works now. So you go, like in Plone 5, to the site setup. You choose the Dexterity content types. It lists the Dexterity content types. And you create a new type. So that's nothing new. We had that in Plone 5, even in Plone 4 with Dexterity. But then you can go to the layout template. You have to enable the editable blocks here. But then you are basically in a template mode. So you can create a template here. So you can override the title, for instance, and create a helper here. So we create the book content type. And the book has a title, of course. And it should have an introduction here. And this should be a required field. That should always be there. So we require the users to have that field.
And we also make it so that it can't move. Because sometimes you have blocks that shouldn't move. And the next thing is an image. So we want to have the cover of the book that we're talking about there. And it should also be fixed. So you give that page a structure. Think about events or news items. They're, in the end, just templates. Because a news item is not different from a page, actually, except that it has a lead image. And you can easily create that within five minutes with that system. So you store the template. Then, with the new content type, you create your book. And then you have the template here. So you give your users like a helper to make it easy to have content objects that are reusable, basically. And that's a pretty cool feature. So as I said, Plone 6 will be all about blocks. We have plenty of blocks. And I could only show you a few things. Almost 100 add-ons on NPM. And we plan to open source lots of new blocks in the weeks and months to come. We gathered here at the conference to discuss which blocks we actually want to have in core, for instance, or which blocks we might put into a collective, and in what shape Plone 6 will ship, down to the details. But there are plenty of blocks around already today. So Plone 6 is really ready for prime time. We shipped the first website with the stack that will power Plone 6 in 2019. And Volto, the new front end, and this stack have been used in production since then. So let me show you just a few examples. One of the highest profile government websites in Germany is actually running on Plone 6 these days. Multiple universities in the US, in Europe, and in Japan already use Plone 6 today. The Humboldt University Berlin has two websites. The University of Minnesota has a website. There are plenty of websites in Japan that are powered by Plone 6 today. One example here is Osaka University. It's considered to be one of the top three universities in Japan.
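One way to picture these layout templates is as a blocks structure with per-block flags. The `required` and `fixed` keys below are assumptions about how such a template could be modelled, not the exact Volto field names:

```python
# Hypothetical model of the "book" layout template from the talk: each
# template block may carry "required" (must be present) and "fixed"
# (cannot be moved) flags. The flag names are assumptions for illustration.
template = {
    "blocks": {
        "t1": {"@type": "title", "required": True, "fixed": True},
        "t2": {"@type": "slate", "required": True},   # the introduction text
        "t3": {"@type": "image", "fixed": True},      # the book cover
    },
    "blocks_layout": {"items": ["t1", "t2", "t3"]},
}

def missing_required(template, document_block_types):
    """Which required block types from the template are absent from a document?"""
    return [
        block["@type"]
        for block in template["blocks"].values()
        if block.get("required") and block["@type"] not in document_block_types
    ]

print(missing_required(template, {"title", "image"}))  # the introduction is missing
```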
It's kind of an Ivy League type university there. And their main public website now runs on Plone 6, thanks to CMScom and Manabu. So let me show you that, so that you also see what Plone 6 can do in terms of layout. You can see here that it has blocks. I mean, you can't really tell that it's a Plone 6 site, which is, I think, a good thing, in my opinion. It just has a really modern block-based layout. It could have been a WordPress site. You don't see that from the outside. You have no clue. But Plone 6 can do that. And it's really an impressive website, I think. Plone 6 already powers intranet solutions with almost 7k users. One of the largest research institutions in Europe is using an intranet built with Plone. Let me quickly show you how the front page looks. So we have an introduction slider. We have events, then we have a highlight block, grid blocks. We have listing grid blocks. The last two are special blocks for this client. And we are currently working on this research institution's public website. Early next year, we will publish that. And it will be a huge website with more than 100 subsites for institutes and projects. And there are more projects in the pipeline, actually. Sorry. I messed that up. Maybe now it's time to dance after all. But you can watch the site again. It looks really nice. Yeah, and that one as well. Keynote won't allow you to skip the animations. OK, the next website that I will show you comes from the European Environment Agency. As I said, they're heavily contributing to open source and to Volto core; Eau de Web is working for them. They published multi-million euro contracts in the past years. And luckily for us, they were given to companies that are heavily invested in the Plone community, like Eau de Web and CodeSyntax and others. And they're open sourcing everything. So that's really a great contribution. And lots of websites are powered by that.
So Eau de Web built the Forest Information System for Europe website. It has beautiful data-driven blocks to visualize data. And that also really shows the power of a modern React framework, that you can use all those fancy libraries. And the cool thing about this is that the blocks are all driven by remote data. So you can fetch the data and show it. And it's really amazing what they did. And it's all open source. So you can check it out. Yesterday, Piero gave a talk about how RedTurtle built a solution for municipalities in Italy. This is the public website of Modena that he showed yesterday. And they have 20 websites in production and 50 more to come. So they really built a very impressive solution here. You can see here video blocks, grid blocks. And you see the recurring theme. So I talked a bit about us using the Plone 6 stack today. What I mean with the Plone 6 stack is that today you can use Plone 5.2, which is battle tested and ultra stable. You can use Plone REST API, which is in core since version 5. And you can use the Volto front end, which has been used in production since 2019. So you can start a new Plone 6-ish project today. There's no reason to wait. We're all looking forward to Plone 6, to have the new big release. But the stack is already there. And it's stable. And this is something that we never had before, as far as I know. We usually put out an alpha version, and then you had to wait a bit and go through that. You can start with that today. You can also use the Plone 6 alpha if you want. But you can also stay with Plone 5.2 and then switch later. So you have all the options, depending on how much you are involved and how much risk you want to take. And Plone 6 is also here to stay. The Plone release and security teams decided that Plone 6 will get full support for five years after the release. Plone has always supported versions way longer than most other systems that are out there.
Take Drupal and other systems, or take a look at Python: Python supports a major version for three years, Node also three years. And then end of life happens. Plone 3 was supported four years. Plone 4 was supported 10 years. And Plone 5 so far has been supported six years. So we did a pretty good job in the past, but we failed to advertise that properly. But this time, we will improve that, and we make the promise to all users and to the community that Plone 6 will be supported at least five years, most likely longer. If you're not ready yet, for whatever reason, for Plone 6, no problem. We also have your back. Plone 6 will ship with what we call Classic UI. It's a modernized version of the Plone 5 user interface that you're used to. Together with the LTS that I also talked about, this will give you five more years to pick up the new front end and to adapt, to see if that works for you. And we know that we have large institutions that heavily invested in Plone, that invested lots of money in Plone, that just need more time. Universities are a good example. Because of their size and their infrastructure, they can't just jump on something new right away. So we've got your back. You will have five years to adapt, and there's no need to rush. But I hope that today I showed you why it's worth it to actually move to Plone 6. So let's wrap that up. Plone 6 will be the first enterprise CMS that combines the power, features, and stability that enterprise users need with the ease of use of a very simple system. Plone 6 will be a perfect fit for large institutions that build internet sites and intranet solutions with lots of editors, where training costs are important. They need a system that's easy to use, so they don't have to put lots of effort and money into training. They need a solution that is scalable and that works within the existing enterprise infrastructure, like LDAP and authentication and lots of other things.
Stefan will give a talk after mine about a client project that we did for Helmholtz and show you how we integrate that. And that's a big asset of Plone that new systems on the market do not have. There are plenty of systems that are new and that use a pretty decent and modern stack. But they lack the stability that we have. And Plone 6 will combine the power and the flexibility that we need and the control that we also need. And I sincerely believe that with Plone 6, we have a very bright future ahead, because we're giving something to the open source world, or the IT world, that's not there yet. Plone 6 will provide something really unique. Thank you. Timo, thank you so much for that keynote. Really great to see you. Show the search block. Yeah, you still have five minutes. We are not hearing Andy, so yes. OK. Let me quickly do that. There you go. That's the search page. You choose a search block and it will list all elements on the website. On the right side, you see criteria. When you're used to Plone 5, think about a collection. That's a block, actually. So you can add filters there, so that you, for instance, only have the example pages. And it will list the stuff. And you can search there. But in addition, you can add search facets here, as many as you want. So let's add a type facet, for instance. So you can choose the content type. You can choose the appearance of it, if you want a checkbox or a select field. And you can also add a label. You can also add a second facet, like, say, the tag, subject field, or keywords. Let's choose a different widget. Let's choose checkboxes. And then you can even choose if you have multiple choices or a single choice. And it will adapt and show radio buttons or checkboxes, which are the right controls for that. You also have a few advanced options. So you can choose if you want to show a sorting option. You can put a label on the sorting option. You can even restrict the sort options that will show there.
You see here all the options that you have to sort something in Plone that come from the collections. Usually you don't want to expose that kind of power to your end users. You want to expose that to your editors, but not to your end users. So you can restrict that here. You can decide if you want to show a search button or if you want to have an auto search, and you can also choose variations. So you can choose if you want to have the facets on the right side or on the left side. And that's all configurable. As an integrator, you can also remove those settings. You can also change the results template. So you have the summary view. You can even have an image gallery there. And then you can save. And then you have a really powerful faceted search user interface. No Solr, no Elasticsearch. Just based on the ZCatalog. So you can search. You can sort on the two settings that are set. You can reverse the sort here. Here you can choose the content type that you want. You can clear the filters if you made a mistake or want to go back. And below that, you can search the text that you have. So you have a really, really powerful search interface here that, back in the past, you had to put a lot of effort into. And that comes by default with Plone 6 core. Thanks. Timo, thank you. That was fantastic. Plone 6 looks to be completely amazing. You guys have put in a lot of work. And I think the community is really appreciative of everything that you've done. And I'm actually pretty blown away by how amazing Plone 6 is. So thank you very much.
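The faceted search behaviour demonstrated here, a multi-select type facet, a single-choice subject facet, and a configurable sort, can be approximated with plain filtering over catalog-like result dicts. All names below are illustrative, not the search block's real API:

```python
# A small sketch of the faceted filtering logic described above,
# applied to catalog-like result dicts; names are illustrative.
results = [
    {"title": "Bus routes", "portal_type": "Document", "subject": "transport"},
    {"title": "Opening hours", "portal_type": "News Item", "subject": "campus"},
    {"title": "Tram map", "portal_type": "Image", "subject": "transport"},
]

def facet_filter(results, portal_types=None, subject=None,
                 sort_on=None, reverse=False):
    """Apply a multi-select type facet and a single-choice subject facet,
    then an optional (possibly reversed) sort."""
    hits = [
        r for r in results
        if (not portal_types or r["portal_type"] in portal_types)
        and (subject is None or r["subject"] == subject)
    ]
    if sort_on:
        hits.sort(key=lambda r: r[sort_on], reverse=reverse)
    return hits

# Checkbox facet (multi-select) plus an alphabetical sort:
hits = facet_filter(results, portal_types={"Document", "Image"}, sort_on="title")
print([r["title"] for r in hits])  # -> ['Bus routes', 'Tram map']
```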
Plone 6 is the first CMS on the market that combines the power features, best-in-class security, and scalability of an enterprise CMS with a state-of-the-art JavaScript frontend and an easy-to-use user interface that gives editors full control over the composition of pages.
10.5446/56666 (DOI)
All right, hi everyone. Welcome to our next talk. Our speaker now is Kim Paulissen. She's a longtime Plone developer for the KU Leuven University web management system. As an active part of the community, in fact, she put in a nomination to be on the board for the next term. So find out later today how that turns out. Kim is going to talk about playing with blocks and cards, Mosaic, Volto, and all that good stuff. So go ahead, Kim. All right. Thank you, Chrissy. I already messed that up. So the talk is about playing with blocks and cards. And not, like you would think, about all the blocks, blocks, blocks that Timo has been mentioning, although we're going to get to that at the end of the presentation. What this talk is actually about, because blocks and cards is one of those phrases where you actually know nothing when I tell you about blocks and cards, is basically that what we use in our university is Bootstrap cards, which we want to have for creating flexible content listings. And this is a bit the story of all of those blocks overviews, as I'll call them in this talk, through all the Plone versions. We'll get to that later. Just as a reference, if we would speak of this in Volto terminology, it's about grid views, or grid blocks. And the Bootstrap cards are the teasers, as they are called in the Volto landscape. Just so I confuse you a bit more about all the blocks and cards and grids and everything else. But a picture says more than a thousand words. So that's what we're talking about. This is, as I said, going to be flexible content listings with cards in Plone 4, 5, and 6, and a little intermezzo on hacking. If you haven't seen these before, there's a whole YouTube rabbit hole you can go down, with pro hacker videos in all different versions. So, how should I say, we're going to start with the easy stuff: Plone 4, and move along to the at least better stuff at the end, which is of course Plone 6. So that's what I just said.
What we see here are the blocks overviews that we made in Plone 4. There's a little intermezzo where I show a little dirty hack to insert the different blocks inside Plone pages, then the blocks overviews in Plone 5 and in Plone 6. So it's a long story. I'll try to make it brief. The presentation took too long when I practiced it. Blocks overviews in Plone 4. We made a little, well, a little big MyKULeuven project. We redid the whole KU Leuven University intranet last year. And one of the use cases was to show pretty news items. So depending on who you are, when you log in, you get a little overview of the news items that are specifically relevant to you, based on the information they have about you. But in our case, it was yet another news overview. It needed to be pretty. It needed to be in the house style. And we had things that were similar. We had like one or two listing views that sort of resembled nice little blocks. And then we had to make a third one. So what does a developer do with all that redundancy? He automates that, or she in this case, I should say. So that's why we started thinking about how we can make it easy for users to show little blocks of all the news items in a page, or just on any kind of overview, like folders or collections, and make it easy for the users to switch between those layouts, because depending on your content, you might want a different overview, or more or less columns, or the image on the left or at the top. And there's like a gazillion combinations we could make in the existing house style that we have. So obviously, it will be easier if I just show you what I mean. If the internet works. Okay. So this is our generic demo site that I make with all my animals. So this is the idea, that we make an overview. This is automatically generated. As you see, this is a collection in Plone 4. And we have, this is now a big one, a blocks view selection to do that.
I'm going to head over to a test page that I made, so I can play with the display without destroying my nice little example. So as you can see, this is, well, I don't know if you can see, a fully automated overview with blocks. And this is actually a folder. This is an image. These are news items, which can show a publication date or not. And this is an event. And these are more images. And what we did was we made the blocks views available to the end users. There's a selector where your user can see little preview displays of how that specific display would look if you choose it. With these changes, the images are still full size. Now let's see if I, for example, change it back. Always nice giving a presentation when the internet is not healthy. Live demos. So this is another view. These all have a specific house style, as you can see. So they're not default Bootstrap cards; Bootstrap cards would be much easier, but we had to account for all the different left and right and top versions of the cards or teasers that we have. As you can see, there's a little, I want to say AI, but it's not that smart. But the card knows how many columns it is being displayed in, and it will display an image, for example, here, because if we would do that over full width in this display, it would look like nothing, or you would just see an ear or something. So that's basically the functionality that we were going for. And this was added on each folder and each collection. So basically on all the folderish content types. We're going to quickly go back here and also demo a bit of the code, because that's all very nice that it works, but probably some of you are wondering how we did that. And the nice thing, well, the thing I enjoyed most building this, was that the users could actually use and choose their listings, obviously, but it's actually just a couple of files in the browser folder in the KU Leuven package, well, the Plone package.
It was made with plonecli. Thank you, Maik, that works very well. And then we just registered blocks; in this case that's actually your grid, so it's the number of columns that you want your blocks to be shown in. And the cards are the different cards that an item, an image for example, can have. So just to show you, it's based on content listing. Not really that important, but for people still using Plone 4 like our university: content listing is a view that is available on all folderish content types. So we could just register all of these views for the folderish interfaces as well and reuse some functionality from the content listing. And these are all the views, but as you can see, it's a lot of ZCML registration. These are all available views. And actually, in the preview, we only show six, but we actually have about 16 variations, or at least 12. We only show the ones that make sense design-wise. So a lot of registration, but what I want to show is the blocks view, and the cards will be just about the same. You have a base blocks view, which needs to do some magic with the batching, because a folder can have many items, so you need to be able to do pagination. And then, as you can see, it's very nice and object oriented: all we have to do is define the columns as a class and one or two properties to say which card it wants to use and how many columns it will have. I thought it was going to be a whole lot of work, but adding a view is basically adding two or three lines. And the card ZCML I'm not going to show; that's the registration of all the little card views. We provided little teaser views, which are just the plain HTML version of any content item. And then we make that specific, like the image that has a different view in the one-column view than in the multi-column views. So the card is the same. You have a base card view.
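The pattern Kim describes, a base view doing the batching while subclasses only declare a column count, can be sketched like this. The class names, properties, and page size are invented; the real code registers these as Plone browser views via ZCML:

```python
import math

class BaseBlocksView:
    """Sketch of a base blocks view: subclasses only set the column count.
    Loosely modelled on the pattern described in the talk; hypothetical names."""

    columns = 1        # overridden per subclass
    page_size = 12     # items per batch, an assumed default

    @property
    def column_class(self):
        # Bootstrap's 12-column grid: 3 columns -> "col-md-4", etc.
        return "col-md-%d" % (12 // self.columns)

    def batch(self, items, page):
        """Return the items for one page plus the total number of pages."""
        pages = max(1, math.ceil(len(items) / self.page_size))
        start = (page - 1) * self.page_size
        return items[start:start + self.page_size], pages

class TwoColumnBlocksView(BaseBlocksView):
    columns = 2

class ThreeColumnBlocksView(BaseBlocksView):
    columns = 3

view = ThreeColumnBlocksView()
print(view.column_class)                  # -> col-md-4
print(view.batch(list(range(30)), 2)[1])  # 30 items in pages of 12 -> 3 pages
```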
And then we have a config file that holds all the card classes, image classes and content classes. This is obviously very specific to the design we have at the university. It's basically that you have like three divs inside each other, and each div needs a different class to say whether the image goes left or right and things like that. So that's what the helper classes do. And then we have a base news item card view, because news has a publication date and an image, and you want it to make different decisions on that. But here is the same thing again. You have a class, and all news card classes inherit from the base card view. And you just add one property that says this is where the image goes, or this is the card type in this case. And for the image it's the same. It just needs a different URL, because for an image you need to go to the view of the image, while on the news item you go to the item itself. So those little tweaks are covered. And same for the image card. All we need to do is register a new class and set one property. That's it. That's the whole thing. And that generates all that. I'll just quickly show the config that is not here. Just to show you that the KU Leuven house style is based on Bootstrap 4 at the moment. And all the configuration, as you can see, is very specific to the design system. But as you can see, the column classes are just plain Bootstrap classes. And they just get triggered depending on which blocks view you choose. All right. That was that. That was the main, well, actually not the reason that I had to create the blocks views, because, as I said, it was made for MyKULeuven. And with the MyKULeuven cards I will also demo that, having the blocks and the cards views as base views in a package that is available in our whole setup, it was actually very easy to then also reuse all of that functionality in a different package, the MyKULeuven package.
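And the card side of the setup, where adding a variation really is just one property on a subclass, might look roughly like this. The CSS class names stand in for the university's Bootstrap-4-based house style and are invented for illustration:

```python
# Hypothetical config mapping card types to house-style CSS classes,
# mimicking the config file described in the talk.
CARD_CLASSES = {
    "image-top": "card flex-column",
    "image-left": "card flex-row",
}

class BaseCardView:
    """Sketch of the base card view; subclasses tweak one property each."""
    card_type = "image-top"

    @property
    def css_classes(self):
        return CARD_CLASSES[self.card_type]

    def target_url(self, item):
        # News items and folders link to the item itself.
        return item["url"]

class NewsImageLeftCard(BaseCardView):
    # Adding a variation really is just one property.
    card_type = "image-left"

class ImageCard(BaseCardView):
    def target_url(self, item):
        # Images need to link to their view, not the raw file.
        return item["url"] + "/view"

print(NewsImageLeftCard().css_classes)              # -> card flex-row
print(ImageCard().target_url({"url": "/cat.jpg"}))  # -> /cat.jpg/view
```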
And all we had to do there — well, it's still a bit much: we still had to override the .py and the .pt because it needed a little extra logic. I will show you. It really doesn't want to help me today. So this is the home page, and this is my customized news. As you can see, these are the same cards, but styled differently. The only thing that's different here is, for example, that we could add the tag: specifically here we needed a different URL — it's not just the subject that goes into the subject line, it goes to a specific theme or channel, which is very specific. But as you can see, they have links that go to the specific category or subject the news item came from, so you can always get back to something very specific to this MyKULeuven application. Everything else — the logic of displaying and showing publication dates — lives in the base package. We haven't explored it further, because obviously we're migrating past Plone 4 and we want to look at how that works in Plone 5 and 6. But it would actually be possible to extend those cards like we did for MyKULeuven — for example for the search, where you have a slightly more specific card that shows the list of subjects with a link on it, when it was published, which folder it's in. Those are all little tricks you could add as card overrides in your own packages. I'm going to quickly go over this: this is Mr. Hacker. Do not try this at home — you're not supposed to. Because we have the @@card view and the @@blocks-one-column, whatever, views, it turned out to be really easy to include them inside a page. We haven't automated it yet in Plone 4, but there is a way, and it's a basic Ajax call. Again, not opening my browser — this is a little documentation page I wrote for our team internally. It actually allows you to include a card, a teaser or a whole collection from a completely different Plone site than the one you're on.
That's why I say don't try this at home — it's a security thing. We have our CORS settings specifically set up; you're not really supposed to do this unless you trust the people working on it, and that's all secured where we are. I'm wondering how I can show you. This teaser has a title, and — I don't know if you can read my status bar, but it clearly says nieuws.kuleuven.be, so that's the news site, and this site, as you can see, is the CMS demo. Same domain, obviously, but a different subdomain. For us it means it's possible to show, for example, the news of one faculty inside the news site, or news from the news site on a departmental homepage, things like that. I'm just going to quickly show you how it's done — there's nothing wrong with showing it; it's an Ajax call, just know that it probably won't work cross-origin in a lot of situations. All you have to do is add a div inside the page where you want the overview to appear, then do a little document-ready function and say: load this URL and use the @@card view. And the same for the blocks views — you say, this is the URL of the thing I want, insert that collection or folder for me (it has to be a folder or collection for the blocks view), something with two columns where the image is 25% of the width. And then, of course, you use your CSS selectors to say, in this case: get me that content, show it in this view, and show me only the first six divs of it. That's all magic you can do with Ajax and a few CSS or jQuery selectors. Okay, that's it for hacking. Let me watch my time — 15 minutes left or so. Blocks and Plone 5: obviously we are migrating to Plone 5 now, and this is one of the first things we wanted to have fixed.
The good news was that we could just migrate the code as it was to Plone 5 without any changes, because it's plain ZCML registrations and very basic views. We didn't even have to fiddle too much with the batch — it just worked. The only thing we had to change is that there's no more ATContentListing, so we had to change all the IFolderish interface registrations to register for both the folder content type and the collection content type. I'm going to switch the demo because I want to show the Mosaic version. Since we just had to show you that little hack to get the blocks views used inside pages, we thought: hey, Plone 5, we can use Mosaic — and Mosaic should make it possible to just drag and drop a block into a page using the view we've already made. Obviously it wasn't that easy. I have to say thank you, Michele: we have a bunch of new guys, and one of them actually wrote the code for this, without any assistance from me, writing Mosaic tiles. He did have to completely rewrite what I did for the blocks and cards in Plone 4 to make it work in Mosaic, but the idea is basically still the same. What we did was add the option to show the blocks version of a content listing on the content listing tile. So now if you add a content listing tile, you can choose, just like before, between the different blocks views — unfortunately not with the fancy previews that we had for the folders and collections. Just so you know, this is the same blocks overview — we still have the blocks view on the document — and as you can see they have a nice collection of the Cage images planted here. So this is the end result of a lot of content listing tiles with the blocks views on them in a Plone 5 site. Nicolas Cage pictures provided to you by zopyx's lorem ipsum add-on — thank you, zopyx, you made this little add-on that creates a bunch of documents, images and files in the site.
Like you can see all here — and the images are of Nic Cage. So I added these, and I will now live-demonstrate, hopefully, how you can choose the blocks view you want. Find the edit button — if you work with too many Plone versions you get confused, and here we have to click on the right side; that's a known issue. I always tell my guys I need to fill up space while waiting for browsers; luckily, I usually talk too much. And now obviously Mosaic will fail on me. Oh, good, we're here. Now I want to move the zoom window away and insert a content listing tile. As you can see, there is already a one-column thing here — that's what I was waiting for. This is actually fine; this is just document one and two. You can adjust the collection criteria just like you normally would — let's say I only want images in this case. And this is the thing we added to the content listing tile: we did have to register all the different views and make sure they would work on the content listing. These are the same thumbnail views you have on the folders and collections. Let's say three columns — three items in three columns with the image at the top — and you can already see it here. Let's move that here, and voilà, there you go: we have a content listing tile showing nice teasers, or Bootstrap cards or whatever their name is these days, with the images — if the items have lead images, because these now work with the lead images provided in Plone 5. That way you can basically add a pretty content listing to your site. Well, the other ones are pretty too, but these have pictures. One bigger picture. Okay, and then the code, just to show you that it still follows roughly the same pattern. This is the KU Leuven tiles package, which holds all the tiles we made to be used in Mosaic — again a package made with plonecli. All we had to do was add a little configuration in the browser folder.
So these again are all the blocks views, as you can see. And now they are registered for the standard tiles content listing tile layer — that's actually all you have to do, plus of course a Python file and a template. This is the same as before: we made a couple of helper classes to figure out all the different card classes you need, and some specific stuff for the images; and a base blocks view that handles some publication date stuff. It's a bit longer than the Plone 4 version, but as you can see, it's a class that always has the same methods to define these things, and depending on which view you want, it returns the classes needed for those specifics — this one will be card-image-left at 50/50%. Again, specific to our design system; if we had to make this for plain Bootstrap cards or something similar, the code would be a lot shorter. So there is a lot of repetition, but it only changes the things that need to change between the different blocks views. And this registers all the different views you see in the drop-down on the Mosaic content listing tile. I forgot to show the .pt, but it's much the same: basically all it needs to do is show the HTML for one card, for one column. What I don't think I have here: on an object in the listing, if you want to call a card view, you just call @@card, or @@card-left, things like that. And if you want to create your own listing templates, you just loop over all the objects, call whatever card you want on each object, and it will generate the view for you. Okay, almost at the end. God mode, for the ones still following Minecraft — it's actually about Volto. Spoiler alert: no demo.
I really, really meant to make a demo for this talk, but I didn't have time this week, and I only just asked Victor what the possibilities actually are in Volto — so I didn't have time to do that, and I also didn't manage to do it in the Classic UI. The good news is that Plone 6 ships with a Classic UI, which is awesome, because like I've heard in talks before, a university may face major migrations and it might not be feasible to move everything to Volto quickly. For our content listings and blocks overviews, it should actually be very easy to reuse the code we have in the Classic UI — again, just ZCML registrations and page templates. And from what I see, the Classic UI uses Bootstrap 5, which means we could reuse all the column classes. I really don't see any issues; if I do a demo, I will post it somewhere so you can see it. So for the Classic UI, that's very nice. But as we've heard many, many times this conference, Volto is all about the blocks, and more blocks, and more blocks — and we love it. I actually asked the Volto team, and Victor said it's very much known to them that the listing block — and probably every other block they have — is easily extensible, because it's basically React components, and you can override them any way you want. So creating a Bootstrap-cardish version of the listing they have by default in Volto should be very easy. As I said, I haven't done it. I added a slide here from five minutes of googling, and this is what came up: there's training documentation — thank you for that — that shows how to do overrides for Volto content blocks and how to use the listing block in Volto, and there's Volto developer documentation on block variations. I'm not sure if this is the correct documentation.
So if someone from the Volto team tells me it's wrong, I will adjust it so it's a better listing. I'm just checking to see if I can quickly show you — we're almost at the end, I think; I have one minute. You can go to 6.demo.plone.org, I think, and as you can see, this is the default listing block that comes with Volto. That's basically all you need: nine out of ten times it's title, description, image, and you just override — in Plone I would say the .pt, but whatever the Volto equivalent is. That should be very easy to override; if I manage to do it, I will let you know. Ooh — how can I help? This is the last slide in this presentation. Reach out if you feel this would solve a use case for you, or let me know if you think it's interesting and want to look at the code to use this in your Plone 4 or 5 setup. I think there are many options to improve this: we could make a collective package, we could make an add-on that makes pretty overviews for the listing block in Volto, or we could document how to do this in Volto. If anything like that seems useful, let me know. Last slide: this is who I am and where you can find me — or just go to the code site and look me up; you will find me by name or nickname. And that was it. Thank you very much. Thank you very much, Kim, for coming and speaking today — it was interesting to see all of that. Next, we will join in Jitsi. There's a link in LoudSwarm; I'll put it into Slack. If you have any further questions for Kim, go ahead and join there. Thank you.
For our new intranet we had to create yet another set of views to display listings and pages and news items. This caused a full refactoring of all listing views used in the university's CMS, with some very nice features for our end users. For our Plone 5 migration we also migrated this functionality to mosaic tiles.
10.5446/56667 (DOI)
Alright, welcome back to Plone Conference 2021, Track 2, Day 2, with, I believe, our last talk today. With me is Flip McFadden, who has been at Radio Free Asia, a Plone developer for 10 years, working on many sites at RFA. He's the author of several Plone add-ons as well as a contributor to Plone core. It's been fun to get to see Flip periodically online — originally in DC, I think that's the first conference where I met you. Today he's going to be presenting Plone output filters and transform chains. Please take it away. Hi, welcome. My name is Mike McFadden, and thank you, Kim, for introducing me. Let me go ahead and share my screen — there we go, I hope that all looks good. I actually threw in plone.transformchain at the end, just in case we have enough time to talk about it, but really I want to focus on plone.outputfilters. First, let me introduce myself: I am Michael McFadden, known as flipmcf, and I work for Radio Free Asia. Radio Free Asia is a nonprofit news organization that brings news into closed countries — mostly China, Tibet, Vietnam. We also run BenarNews, which brings news to Southeast Asia. But Radio Free Asia is where I got my Plone introduction and where I've worked in all my Plone shops. We're also in the middle of a refresh of our logo — this is the new logo we have for our site. So let's jump right into the problem. What we have is a database full of stuff — here we're going to talk about the text fields, or maybe the entire Plone response to the browser — and we want a way to hook in and change the HTML. I think one of the greatest examples was links: in a past life I worked for an organization that had link IDs on every single link. The search engine optimization team said you can't do that, but the tracking team said we had to.
So we provided Googlebot a version where we just ran our entire web page through something that removed all the link IDs from all links, and presented that to the Googlebot while giving everybody else the tracking version. But that was long ago — that was in the 90s. Now we have things like the TinyMCE images — when you put an image into TinyMCE — the safe HTML transforms, Mosaic blocks. That's all done by intercepting a request or response, doing something with it, and then sending it on downstream. And that's what we want to solve. There are two ways that Plone solves this, and I'll try to talk about both of them, but I'm going to focus mostly on output filters. If there's one thing you can take away right now, it's that plone.outputfilters deals with fields — specifically fields with mime types, like text fields and rich text fields, anything you're probably going to put a rich text widget on. If your field has a mime type on it, it's probably going to use plone.outputfilters. And then there's plone.transformchain: after the entire template is assembled — the browser view is rendered, the templates are rendered, everything's done and we're just about to deliver, like the ZPublisher is done — we grab that response and do something with the entire document. That's actually a little bit easier a use case to deal with in Plone, so we'll focus on output filters first. One of the most obvious use cases for plone.outputfilters — I guess it's a little bit blurry, but what we're going to do is create an image here, steal a caption straight from Wikipedia, and upload our image. We have now created an image in our classic Plone install. Then we'll go over to this news story and add the image directly into our text field. This is not new; this is stuff we do all the time. What's going to happen, though, is when we wander around here and actually find — there's my image.
And notice I'm not going to fill out anything here — I'm just going to check this box saying "use the description from the image", and we insert it. Just play with your editor: there is no caption here in the editor, but once you save it, magically, the caption comes from the actual image. So that is not stored in the text field; something magic happened there. What actually happened was that plone.outputfilters grabbed that text field value and did something to it. So right here I'm going to run through my Zope debug prompt — my Python console — and look at the story and ask: what is this text value? You end up with this: "Hey, I'm a RichTextValue." If you've seen this before, good job — you are using Plone. And if we look at the raw value, we can see that all we have is an image in here. This is a slightly different example — I didn't want to put the full text of the first one on here — so this is my Adventure Time story, and it has just a picture in it, and that's it. This is the raw value stored inside it. But once it goes through a magic output filter, you end up with this really nice wrapping in a figure: my image gets linked, and I have a figcaption. Something happened between the raw text and the output text, and it's really cool. Thank you, Plone — but what are you doing? Those were the first questions I had when I first approached Plone: there's magic going on here, and I don't like magic — I need to know how every single thing works. So we found out — some of you know this, maybe some of you don't — that this is plone.outputfilters, the captioned-image filter. And the nice thing, what I really like about how this was done, is that they made a browser view just for the captioned image.
So if you don't like that image template — and as you saw, you can't really tell the caption text apart from the actual text of the story; there's no visual difference, and maybe you want to style that — well, you can style it with CSS, but if you really want to change the template itself, the template is right there, captioned-image.pt, and a simple z3c.jbot override will do most of the time. More advanced usage is to override the captioned-image view itself, which you can do with overrides.zcml or a browser layer, which is what we use at RFA — because we put a new field in our images called copyright. We need to show the copyright with all of our images, so we have a new view that takes the copyright and puts it into the template. Moving on, let's talk about ways we can actually use the output filters. What happened to us is the request came down that RFA wants to be on Google AMP. Let me skip forward: what is Google AMP? Google AMP is a special web page format for Google. You find it on your mobile phone when you do news searches — we are a news organization, and we put out news stories. When you search Google and click on that News tab, you get the results, and there's Radio Free Asia coming up as the number one result for "Uyghur". When you click on that, you end up with this other view. This view is not coming directly from our website — it's actually a cached copy inside Google, which is Google AMP. So it's basically the same: it's our story, it's another view; think of it like that. But we have to write our HTML a little bit differently. If we want a captioned image in this AMP view, the Google specs on AMP say: please use the amp-img tag. We're not going to use the img tag; we're going to use amp-img. And so now we've got a problem.
We have to provide special captions to AMP, different from what we're giving to the rest of the world. Thank you, Google, for making our jobs hard — but we get paychecks now. Of course, the first thing we reach for — forget about output filters for a moment — is that we just need a view. So right here we quickly set up a new view on our story that says: I'm the AMP view, and I'm going to use an AMP template. That solves most of the problem: we now have a new AMP view of a story. But the captioned image is done a little differently — we're actually going to change the caption rendering, so when we get that text field from the story, it uses a different template to render it. The way I do this — and I don't know if this is right; I would love to be in a Jitsi and ask about this part — is with a browser layer: I'm going to tag my own request. I'm going to say, hey, I am an AMP request, and then register a captioned-image template and view for that Google AMP request layer. What that does is: when the request is tagged with the AMP layer, the lookup finds the layer, finds the name — the plone.outputfilters captioned-image view — finds my overridden one, and uses my new template. So everything's fine — yay, we've got different templates for captioned images, one for AMP and one for the regular story. And then we hit a problem: there is a cache, and I didn't know this cache existed until I started playing with this. What's going on is that whenever you do a transform on a text field — and we are doing text fields here — the field has a default output mime type, text/x-html-safe, and whenever you ask for it, the first request gets to do the transform and the next request doesn't. So if I request the AMP view first, that captioned image is cooked and given to everybody else — and vice versa.
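The layer-based override just described — register the captioned-image view a second time, for requests tagged with an AMP marker layer — would look roughly like this in ZCML. All the names here (the layer interface, the view name, the template file) are hypothetical stand-ins, not the actual registrations from the talk:

```xml
<configure xmlns:browser="http://namespaces.zope.org/browser">
  <!-- When the request provides the (hypothetical) IAmpLayer marker,
       this more specific registration wins over the stock view. -->
  <browser:page
      name="captioned-image"
      for="*"
      layer="my.site.interfaces.IAmpLayer"
      template="amp_captioned_image.pt"
      permission="zope2.View"
      />
</configure>
```

The component architecture prefers the registration bound to the more specific request layer, which is why no code in the filter itself has to change.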
If you restart your server and hit the desktop story page first, then when you go over to AMP, AMP starts complaining that your HTML isn't right — even though you put all that hard work into a new caption. So how do we get around this? I had to go out to the Plone community on this and ask what the best way to approach it was, and I got the answer: create a new mime type. By the way, here's the schema for the text field — just to get into a little of what Products.PortalTransforms is actually doing. This is taken directly from the rich text behavior in Dexterity: the mime type is text/html and the output mime type is text/x-html-safe. So when we ask for our field, we get text/x-html-safe; under the covers, when you run story.text.output, or just call .output, you are actually calling the transformer inside. So that's what we do: we go to Products.PortalTransforms, and this is what's going on under the covers — if we ask for a different mime type, we get a different cache key. And that's exactly what we do. Again, Products.PortalTransforms: what is it? It's a system for transforming from one mime type to another. There's a screenshot of maintaining it in the ZMI, which is... well, it's there. I've already covered how plone.outputfilters works, and this is taken directly from the documentation: it creates a new mime type, text/x-plone-outputfilters-html; a transform to it, which does your image captioning; a null transform back; and then a transform policy — which is the important part — that tells Products.PortalTransforms that if you want text/x-html-safe, you must go through output filters. That's how plone.outputfilters has hooked itself into PortalTransforms: it's actually hooked itself into the text/x-html-safe transform.
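The "different mime type, different cache key" behavior driving all of this can be modeled in a few lines. This is an illustrative toy, not PortalTransforms' actual code: transformed output is cached per target mime type, so registering a second mime type simply gives the AMP rendering its own cache slot.

```python
class TransformCache:
    """Toy model: output cached by (raw text, target mime type)."""
    def __init__(self, transform):
        self.transform = transform
        self._cache = {}

    def convert(self, raw, target):
        key = (raw, target)
        if key not in self._cache:  # only the first request transforms
            self._cache[key] = self.transform(raw, target)
        return self._cache[key]

def toy_transform(raw, target):
    # stand-in for the real filter chain; AMP output swaps img for amp-img
    if target == "text/x-html-safe-for-amp":
        return raw.replace("<img ", "<amp-img ")
    return raw

cache = TransformCache(toy_transform)
desktop = cache.convert('<img src="cc.jpg" />', "text/x-html-safe")
amp = cache.convert('<img src="cc.jpg" />', "text/x-html-safe-for-amp")
# Two cache entries now coexist; neither view poisons the other.
```

With a single mime type, whichever request arrived first would have populated the one shared slot — which is exactly the bug described above.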
If you want to get around output filters, ask for something other than text/x-html-safe and none of this will run. Mostly we're just defaulting on top of defaults on top of defaults. Unless you know what you're doing, I don't recommend asking for a mime type other than text/x-html-safe — but that's all we really needed to know about how the output filters work to give ourselves slightly different behavior and avoid that cache collision. So we just create ourselves a new mime type, "x-html-safe-for-amp" — you know, just come up with a name; that's all it is. Now when we get our text in our view, we manually call the transformer: instead of just calling text.output, we take the raw text, throw it into the transformer, and ask for "safe-for-amp". And now we have two cached copies: text/x-html-safe serves the normal desktop view to everybody, and safe-for-amp serves our AMP-rendered text. They're both still cached — we didn't have to turn off the cache, which would have been a bad idea. To wire this up, you basically just follow the plone.outputfilters pattern; they've already figured it out for you, and you repeat it: a new mime type, a new registered transform, and then a new policy that says safe-for-amp must run through the plone.outputfilters HTML transform — the same way text/x-html-safe is set up. It's the same thing; all we're doing is creating a different cache key. I was going to take a code tour, but I don't feel comfortable switching my screen around right now — I can do that in the Jitsi. Now that we've covered how PortalTransforms works on fields, I want to get into plone.transformchain: all the fields are done, and now we're going to mess with the entire document.
So I'm not even going to try to put it in my own words — they say it best in the documentation: plone.transformchain provides methods to modify the response of a page before it's returned to the browser. What's going on here — and this is an easy transform, again stolen straight from the documentation — is a neat way to take your entire Plone site and capitalize every single word on it, in case you want to do that. Maybe it's a request from management, I don't know — I want my Plone page to yell at me. And this is how you do it. It's a pretty easy transform; this is the example transform, and it's just calling .upper() on the result. What comes in is a string, what goes out is a string; you implement ITransform, throw a ZCML adapter registration in there, and boom, you're done. It's not hard. ITransform can be confusing, though, because both plone.transformchain and Products.PortalTransforms define an interface called ITransform, which can make a code search confusing — if you search for ITransform, you're going to find a lot of stuff. If you want to know how the plone.transformchain transforms run, do a search for the transformchain interfaces. But I do want to talk about what you find once you get into the transform chain: a lot of Plone uses it already, and there are some conventions I've gleaned from it — I haven't actually read these anywhere, so maybe I can improve this slide after talking to more people. What you really want is: if you want to play with your HTML as an XML tree, you need to do that somewhere between order 8000 and 8900, because plone.app.blocks is going to parse it into an lxml tree for you, and then you can use all your neat XML tricks to modify your output.
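The uppercase example just described might look like this — modeled on plone.transformchain's documented sample; the exact method set is from memory, and the real class would be registered as a named multi-adapter on (published object, request) in ZCML:

```python
# Sketch of a plone.transformchain-style transform, after the documented
# "uppercase" example. The transform engine sorts registered transforms
# by the `order` class attribute and calls the method matching the
# response body's type (str, bytes, or an iterable of chunks).
class UppercaseTransform:
    order = 9000  # run after the tree has been serialized back to text

    def __init__(self, published, request):
        self.published = published
        self.request = request

    def transformString(self, result, encoding):
        return result.upper()

    def transformBytes(self, result, encoding):
        return result.decode(encoding).upper().encode(encoding)

    def transformIterable(self, result, encoding):
        return [self.transformString(item, encoding) for item in result]
```

Returning `None` from a method is how a real transform declines to change the response; this sketch always transforms, for brevity.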
And then plone.protect is going to turn it back into a string for you before it goes out to the browser. Caching and compression are also done in the transform chain, at the high end of the ordering. So plone.transformchain has this ordering — as you see, there's a class property called order, and that's what the transform engine uses to sort all the transforms. Setting that order, knowing where you sit in the chain, is important. To show you a little demo of what RFA is doing with the transform chain: our web editorial staff use the Arabic script — for our Uyghur audience, they actually input in the Arabic script, and we transliterate that, alphabet to alphabet, into Latin and Cyrillic. We do the same thing with Chinese in our Cantonese service: there's a button to say I don't want to see Simplified, I want to see Traditional, and back and forth. So our web team only inputs in one script, and we use transforms to apply rules that say: take this letter of the alphabet and turn it into this one. It's not quite that simple, but basically that's the concept. So that's it for me. If you'd like to see any of this example code, it's available in our GitHub repository, radiofreeasia — the RFA site. It's a private repository for regulatory reasons, but just send me an email or a Slack message to say, hey, can you add me — I have no problem sharing this code; we just have to keep it private so it's not available on search engines. That's about it. Thank you very much, and I'll be available in the Jitsi — I think we have what, 10 more minutes left? So I'm available for conversation in the Jitsi after this. Thanks, Flip. I'm going to go dive into the transform chain — I had no idea all this magic was going on inside.
It's great that you were able to excavate it for us and show us, I guess a safe route in and hopefully a safe route back out. Thank you very much.
This talk should make you more confident in modifying and extending Plone's powerful and efficient post-publish transformation tools. The first part covers plone.outputfilters - changing the behavior of TinyMCE images and the concepts of mime type transforms. The second part is on plone.transformchain - intercepting the entire HTML reply and changing the content before it is sent to the client.
10.5446/56668 (DOI)
Well, now it's time for Productizing Plone. This talk is from Guido — Guido Stevens — and we are very glad to have you with us. Guido, thank you for being here. Well, that's it. Thank you. Okay, there we go. Yeah, it's weird to do this talk in my room at home and not see the community, not see the people I'm doing the talk for, missing the context of being together in the room. I think we've lost Guido. I don't think it's going to work. Sorry for that — I'm going to try to fix it. Okay, you're back. All right, start again. You're muted. Sorry for that. Okay. Yes, I got a warning from Zoom about being the last one. It's okay, no problem. Let's do that again: start, then share the screen, then find Zoom. Okay. I hope you can hear me now and that everything is properly set up. Yeah, I'm good — you can go. Okay, let's go. My name is Guido Stevens, I'm the founder of Quaive, and I will talk you through how we evolved from a consultancy-driven approach to Plone to the product approach that we have with Quaive. A bit of an outline of the talk: I'll give you a short tour of our design environment; I'll go deeper into the problems a consultancy approach poses, with some live war stories from the field; I'll go deeper into how we resolved the tensions there; and finally, a summary of the remaining challenges this poses for actually making a viable business out of it. For those of you who have already seen it, and for those who haven't, this is useful context for the talk. Everything I'm showing you here is actually our design prototype. Quaive is a social intranet system — a community system. We have a prominent social stream. We have a collection of apps; one of those is a calendar app. We have a news magazine in Quaive. There are workspaces, which people can self-manage when they're a member of the workspace.
They can then share and collaborate on documents, or in this case an email that came in. There are events, there are tasks, which are shown here in the workspace. And there's very powerful search, which is faceted but which also has different variants optimized for different views. These are our user profiles, and they are very deep. So that's just a whirlwind tour of the feature side we have in Quaive. And this is how it looks now, but it's not where we came from. So back in the day things were different. Where we came from is a consultancy approach where we had big dreams and built big sites. We had big projects that went off track. We built big projects and we found out the hard way that one-off doesn't scale. You can build very nice solutions that way, but in the end, what the client wants is not what their users will need. So there's a big tension there between building something that's actually valuable. And also, because it's one-off, you get into maintenance hell. And the illustration is, I think, typical for the challenges you face in a consultancy environment, where you want to do your best for the customer, but the customer has some very clear ideas about how the solution should look. And it's not necessarily the case that these ideas are optimal, either from a user experience perspective or from a product architecture perspective. So to give an example of how that pans out in real problems: this is a screenshot from an early iteration of a little app which was used in a predecessor of Quaive, a big intranet solution. And the requirement for this was discovered only just before launch. We had prepared this project and were ready to launch it, and then just three weeks before launch the customer comes and says, you know, we need to have the legal contracts in the data migration as well. And no one had talked before about the fact that there were legal contracts.
So, hey, what do you do as a consultancy shop in such a situation? You create a contract content type. You build some filters to query that database. You know the drill, you just build it. And it's what we now call developer design. Basically you have the requirements and you whip up a fast solution in development, because it was not budgeted that this needed to happen at all. That's a loss. The next thing that happens is that a redesign of the project comes in. It's not aligned with the rest of the project, so everything breaks here. And you show it to your designer and he's like, what the hell is this? And then you need to fix the UX problems, you need to fix the technical problems. Basically, you need to reimplement 50% of it before it's really acceptable, and that's another loss. So you think, okay, we made a loss, but now we have a nice legal app, let's sell it to another customer. We could recover those losses by selling it at a margin to our next customer. Well, the new customer has new requirements, and the new customer doesn't like the old requirements of the old customer. So you have to build a new variant of the app for the new customer. And the old client doesn't want the solutions that the new client wants. So you also have to maintain the old variant of the app. Basically you have to backport all the improvements you're making for the new customer to the old app. So now you have two apps. And then what happens? The next thing is that a generic toolbar redesign comes in, which affects all of the pages in our system, not just the apps, everything in our system. Normally that's a lot of work anyway, but then the legal app is special, it doesn't conform to our design system. So it breaks, and actually, bang, it breaks twice, and I need to upgrade two non-standard apps.
And nobody's going to pay you for that, because nobody said you should do it like this and nobody funded it like that. So after years of this struggle, the conclusion is: we cannot afford to do things cheaply. That's actually a saying by Alex, but it's quite applicable in this scenario. So what to do instead? I call it a product state of mind, and I'm approaching it from a few different angles. The first one is interacting with clients, and that's where it starts: give me a problem, not a solution request. Typically customers ask for a button here, or they want a tweak there, and they think they are helping you by putting it that way; they are often thinking just in terms of the easy solution, the obvious solution. So it starts with saying no to a solution request. It's like: okay, tell me the problem, let's analyze that problem, and let's generalize that problem. And it turns out that what the client wants is not exactly what their users need, because it's our job to actually fully understand that field and to work with design and with requirements analysis, knowing the constraints of design and the constraints of the technology, to find out what the solution is and think through all the other cases. That's not our customer's job. Our customer's job is to understand the problem and to communicate that to us. So it basically means twisting the whole conversation away from feature requests and towards talking about user stories. And then in a later stage, when you have a design, and I will get to that, you can show that it actually satisfies the problem. But first you have to understand the problem, and you need to do the design first. On the left you see a diagram, and it's not really about design, but it is about the whole workflow of going from a requirement to a thing that is in production.
The point is that the earlier you detect a problem or a bug, the better, and a bug can be a design bug or a requirement bug; the later you detect it, the more costly it is. And I always say: I've had a lot of customers who had messy business processes with inconsistencies, and if you just take those one-to-one and translate them into code and into design, then because they're inconsistent requirements you end up with inconsistent code. We call that buggy code. So it's really about moving that detection as far up front as possible. And we do that by doing a lot of design, by taking a holistic picture in design, so our designer has this holistic view of the whole system. You cannot just do a locally optimized solution and ignore the fallout that has elsewhere; you need somebody who takes care: okay, if we change this, we're not consistent anymore with what's happening there. We need to solve all of that at once. We do that in what we call Proto, which is our design environment. It's a frontend-only system which does not have any Plone backend at that stage, and we can iterate on it really fast. And it allows us to actually do proactive design, to pre-design features, and we can actually show them to clients before we even build them; we just have a design. And I think a lot of this comes down to a misconception about what design is. People see the beauty in the design, and they think it's about the colors and how the rounded borders are set up. But that's actually not why it's beautiful. Design is not just what it looks like and feels like; design is how it works. That's a quote from Steve Jobs. And when you go in depth into how it actually should work, what you can get is a work of art in the way it functions and in the interactions that it offers. It just feels right. It feels nice, it feels smooth. And that's the function of design.
You still need to then sell that, and how do you do that? You have this generalized design, and you work with your clients to come up with it. It's presented here on the slide in a linear fashion, as "find generic product value" and "solve this client's needs with the generic solution", but of course it's iterative. You come up with an abstraction of what the client wants, in a way that it can serve all clients, and you validate with this specific client that the generic solution is actually fine for them. The way we find generic product value is by aligning with product-wide affordances. That's stuff like the social feeds we have, workspaces, bookmarks, workflows, user profiles, search. Those are all generic affordances that are product-wide, and you can mix them into new shapes to come up with new solutions for clients. That basically means extending solution patterns that we have in such a way that they embrace this new emerging requirement. And if needed, we can also add a generalized app, if the requirement is so new that it doesn't really fit within any of the existing applications we have. Like I said, you then solve the client's need with the generic solution. And it may be that you then need to parameterize configuration, so you make a small twist on the generic solution. The key is not to make that a client-specific design. It's more that we recognize that this product can be used in multiple ways, one of which is optimal for this client, but it will be useful for other clients as well. It's still a variation of the overall thing. It's a configuration, not a fork. Finally, you ship an MVP, a minimally viable product. If you want to do it right and build a good product, there's no way that for a single customer you can ship a kick-ass feature that is fully worked out.
Unless the client has a lot of budget, which is really rare, you will need to amortize that over a longer time. You need to build something that is good now, that delivers 80% of the value for 20% of the effort, which is the Pareto rule. And then later you can return to it, and that's actually good, because this client gets to give it to their users. The users get to experiment with it, and you get feedback on it. And then all the initial ideas you have about how it should evolve become grounded. They get changed by practice, and then you can build the extension later, informed by actual usage patterns that you're seeing and actual requirements that people articulate. So what that all boils down to is that you should build half a product, not a half-assed product. That's Jason Fried from 37signals, and I like that approach. Instead of building something that is fully functional but weird, it's better to build something that is really slick and has a lot of potential to grow into the future, even if it's not fully realized yet. It's still already useful. What it gives you, if you manage to strike that balance and get it right, is that you reach a point where you can experience scale and synergy. Deployments are QA-tested product-wide, which means that if we do a deployment of the product, and we treat it like a product, it's rolled out to all our customers. We don't have to test the whole product for each customer, we can just test the product once. Then we make a few checks that the themes for these customers are still working, but the functionality and all the core interactions are just in the product, and if they work in one theme they will work in the other themes as well. It also means that we can have automated upgrades. We can just script a new point release, and it goes out.
And that then also translates into fully scripted deployments, so the upgrades are scripted and the deployments are scripted as well. On the business side, in terms of maintenance and extensions, it means that we deal with this in a subscription manner. We have clients pay for subscriptions, and we are able to handle all of the effort it takes to increase the quality of the product as a whole for all our customers, because we share the cost across our customers. It allows potential for cross-sales: if multiple customers want the same feature, we can pool that money and build a better variety of that feature. And it also allows for upsell: we sold a feature to one client, and we made basically a lossy investment on it, thinking that we could sell it to another client, and actually we can. And then, again, we can build a high-quality product that way without necessarily billing everything to a single customer. It's not all rosy. It still leaves some prominent challenges in product space, which I will go into: for clients, for developers, and for the business side. Clients are the key here, and the approach is to nip it in the bud, which means no one-off customizations. I googled for an image of a kitchen sink, and I thought this one was quite appropriate. If you follow the lead of the one client that you're currently working with, you will end up building a kitchen sink like this, in the shape of a guitar, because this client has a guitar. He wants to wash his guitar. He also wants to wash his hands. This is the optimal solution for this client. Of course, we all understand that for the next client, who maybe doesn't have a guitar but has something else, it doesn't fit. So instead you design a solution that allows this client to wash their guitar without giving a guitar shape to everybody. And the solution pattern, like I hinted at before, is that we have configuration as designed product variation.
So it's not that we hack the system for a specific client; it's that we actually, in the design phase, integrate the requirements for this client in such a way that they will be satisfied with the product. An example of this approach is the customer brands we have: our themes are not software, they are CSS only. Let me just quickly show you how that works. So this is one theme. This is another client. And another client. It's the same product, just styled differently. On the developer side, it means that we as developers have to submit to the paradigm of design first, which means: if it's not in Proto, if it's not in our design system, it does not exist. That's really hard. As a developer you see something that isn't fully worked out and you want to fix it. But we don't do that, or at least we try not to do developer design. The solution pattern here is that we do design calls and design reviews with the designer. As an example, we recently did a project for 500 error pages. If you see this design in the left pane as a developer, you see the word "support" being underlined, and you think: oh, that's a mailto link, and it will point to the registry record for the maintainer email address, because you know how Plone works and that's the obvious implementation. So I put that to the designer, and he says: no, no, not a mailto link. That should be a link to a help page, a support page. And I'm like: oh, but we don't have a support page. And he's like: yeah, yeah, it is there, it's in the admin app. And then: oh, we didn't build that admin app yet. So you have to go back and negotiate whether you can start out with a mailto link. But still, we then need to buy into the full vision of having the full solution for all of that there, with development and engineering working towards the design vision. So design is in the lead, and then we follow and build it like that.
On the business side, that results in technical debt, which is unavoidable. We try to keep dev in sync with Proto, so we try to keep the Plone system in sync with the design system that we have. Design is in the lead, so by definition we are behind. On top of that we have ongoing maintenance, because we need to ship. We do make compromises while we're shipping, and then they come back and we need to return to them and improve that code. And the solution pattern here is also proactive design: to actually invest more in the design and then sell the solutions to clients, but also to make sure that in design everything has been fleshed out in such a way that once we get around to fixing stuff on the development side, we actually do it in a way that moves it closer to the articulated vision in design. An example of that is the admin app, which is designed here. We don't actually have it yet. Well, we have the functionality of having external apps, but we currently configure those in Barceloneta, so in raw Plone. That's of course not acceptable from a UI perspective; Barceloneta is not part of our UI, it's a workaround. So we have an articulation of: okay, once we want to build this properly, we know how to do it. If you don't work that way and don't manage your technical debt, you will end up with newspaper articles like the one I'm quoting here, the piece in the Guardian, which is actually quite a nice piece about technical debt. They ask why Blue Origin fell behind SpaceX, and their conclusion is that it's an immense amount of technical debt, defined as engineering challenges that build up as a result of choosing a quick solution rather than the best solution, which I think is quite appropriate. This is something that can kill your company if you don't take care.
So there's a tension between customer intimacy and product excellence, as it's traditionally called in management speak, but you actually need both. You need to work really closely with your customers without being captured by their requirements or by their solution; you need to realize an excellent product that they are satisfied with, but that you can also sell to other customers. Should you do this in an agile fashion, gradually, incrementally, or do holistic design? Actually, you need to do both. You need to work ahead in design, and still you need to be very flexible, both in code and in the design, because it also needs to evolve. How do you handle technical debt? Well, it's unavoidable, you just need to manage it, and you need to constantly keep investing in reducing technical debt, because each time you're also incurring more of it. While I was preparing this talk I saw this on Twitter: somebody quotes a product leader from a well-known company, and the product leader says, you know, honestly, sometimes I want to go back to my old job, which was more project-oriented, more feature factory, because people underestimate how hard this design-led product approach is. And then this guy answers that he would like to go back to the feature factory because he would be on the giving end instead of the receiving end. I thought that was quite funny, because it's nice to build new features. It's perhaps less nice to be a user of such features if they're not fully well developed. And it's certainly not nice to be the person in the back room who has to maintain those old features that somebody else built. So, well, thank you very much. That's a very good talk. Very interesting.
Coming from a traditional web consultancy background, we've launched Quaive as a product-centric "built on Plone" solution. We still need our clients to be happy, and we still do a lot of consultancy. This talk highlights some of the tensions and pitfalls resulting from a hybrid consultancy-product business model. You'll walk away with insights on the importance of design, expectation management, architecture, and balancing technical debt versus making a quick buck. This talk takes a product owner perspective and will be useful both to project managers working with clients and to developers who want to deepen their understanding of the business side.
10.5446/56669 (DOI)
Welcome back to the Plone Conference 2021, day three. With me now is Thomas Schorr, who's been a longtime Zope and Plone developer. Great guy, very calm, but with a really good sense of humor. I mean, look at him, we're friends. He's gotta have a good sense of humor. Thomas last year introduced to the world the Pyruvate WSGI server, and he's here today to give us an update on what's happened since then. Please go ahead, Thomas. Thank you, Kim, for introducing me. And hi, everybody. Thanks for having me here at the Plone Conference. At last year's Plone Conference, I presented Pyruvate, a WSGI server which is implemented mostly in Rust and a little bit in Python. And I want to give you a project status update and also show some performance comparisons with other WSGI servers. Let me start with a short introduction to WSGI and its use in Zope and Plone. WSGI is the Python Web Server Gateway Interface and is specified in Python Enhancement Proposal 3333. It's the standard interface between Python applications and a web server. And it's pretty old: it was created originally as Python Enhancement Proposal 333 in 2003, and then there was an update in 2010, mainly because of Python 3. As for Zope, Plone and WSGI: we all remember the community effort to migrate Plone to Python 3, led by Philip Bauer a couple of years ago. During that migration, Zope 4 was introduced, and Zope 4 replaced ZServer, which was previously bundled with Zope 2, with WSGI. And all of that happened actually quite late, given the age of the WSGI PEP. During that migration I contributed a bit to the WSGI setup of Zope and Plone, and I was also exploring working WSGI configurations with existing servers. And eventually, out of curiosity, I decided to start writing my own WSGI server, also because I wanted to learn Rust and thought it would make for an interesting project.
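The WSGI contract described here is small enough to show in full. This is a minimal, illustrative PEP 3333 application (not code from the talk):

```python
# A minimal WSGI application per PEP 3333 (illustrative).
def application(environ, start_response):
    # environ is a CGI-style dict describing the request;
    # start_response is a callable taking a status line and header list.
    body = b"Hello from WSGI"  # response bodies must be bytes
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]  # an iterable of bytes chunks
```

Any WSGI server, whether Waitress, bjoern or Pyruvate, can serve a callable with this signature; that interchangeability is the whole point of the PEP.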
The goals I set for myself were to write a multi-threaded server with good performance that could be used with Zope and Plone. Some more background: since the ZODB is not thread-safe, it turns out that there is actually a very limited choice of WSGI servers that can be used with Zope, and the Zope documentation actually recommends only two WSGI servers. One is Waitress, which is the default and ships with Plone; it's implemented in pure Python and has good overall performance. The other one is bjoern, which is a fast single-threaded server written in C. Other popular WSGI servers show poor performance with Zope. Why did I choose Rust for the implementation? Rust is great for writing fast and secure code, and it's becoming steadily more popular. It's been the most loved programming language in Stack Overflow's developer survey for five years in a row now. It's also great for extending Python, and there's a prominent example we all use, which you might not be aware contains Rust code: the cryptography package available on the Python Package Index. There is setuptools-rust, which helps you build Rust extensions for Python, and there are a couple of Rust/CPython interfaces. I'm using rust-cpython, but there is also PyO3, and I think there is a Rust CFFI as well. So what is Pyruvate from a user perspective? It's a package available from the Python Package Index. To use it, you would pip install pyruvate, and then it's an importable module. You import it, you have your WSGI application, and it basically consists of one function named serve. You pass in the application as a parameter, the socket you want to use, and the number of worker threads, and that's it: you have a running WSGI server. To use it with Zope and Plone, buildout is still around.
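Based on the serve() signature just described (application, socket, number of worker threads), usage might look like the sketch below. The socket address and worker count are illustrative values, not taken from the talk:

```python
# A trivial WSGI application to hand to the server.
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"OK"]

def run():
    """Start Pyruvate (this call blocks). Requires `pip install pyruvate`."""
    import pyruvate
    # serve(application, socket, workers) -- the one-function API from the talk.
    # "127.0.0.1:7878" and 2 workers are example values.
    pyruvate.serve(application, "127.0.0.1:7878", 2)
```

Calling `run()` would then serve `application` until the process is stopped.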
I heard Jens Klein's talk yesterday, and we are all going to move to pip install Plone, but as of yet, buildout is still around. And since Pyruvate has a PasteDeploy entry point, you can use it with buildout as well. You can specify a WSGI ini template in your instance part. And the template is pretty simple: you pass in the egg entry point, the socket you want to use again, and the number of workers. A quick rundown of the features: Pyruvate supports the active Python versions, currently that means Python 3.6 through 3.10. I already mentioned it's got a rust-cpython based Python interface. I'm using mio, Metal I/O, which is a Rust crate that is part of a larger framework called Tokio, an async Rust framework, to implement the I/O event loop. And then I'm having a worker pool, so I'm collecting the requests and, after accepting them, handing them off to a worker pool. Then there's the PasteDeploy entry point. Pyruvate integrates with Python logging, so it does quite some logging, and it does asynchronous logging, meaning there's a dedicated logging thread, so the Rust code that creates a log message does not need to hold the global interpreter lock. You can use it with TCP or Unix domain sockets, and it also supports systemd socket activation. The current version on the Python Package Index is 1.1.1. I changed the development status to production/stable in version 1.1.0, and the latest version adds Python 3.10 support. It supports Linux and macOS. It's hosted on GitLab. I'm having quite a couple of tests now; the test coverage is at 89%. But it's important to know that there are Rust tests and Python tests, and only the Rust tests currently add to the reported test coverage. The Python tests, which are implemented as tox tests, are on top of that.
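A sketch of what that buildout wiring might look like. The option and section names below follow common plone.recipe.zope2instance and PasteDeploy conventions, but treat the exact spelling and paths as assumptions rather than something quoted from the talk:

```ini
[instance]
recipe = plone.recipe.zope2instance
eggs =
    Plone
    pyruvate
# hypothetical template path pointing at the ini fragment below
wsgi-ini-template = ${buildout:directory}/templates/pyruvate.ini.in

# templates/pyruvate.ini.in -- the server section of the generated WSGI ini:
[server:main]
use = egg:pyruvate#main
socket = %(http_address)s
workers = 2
```

The `egg:pyruvate#main` line is the PasteDeploy entry point mentioned in the talk; the socket and worker count mirror the serve() parameters.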
So the actual test coverage might be a little bit higher. There are binary packages for Linux, so you don't need Rust to use Pyruvate. They are mainly manylinux2010 wheels, so it's important to have recent pip and setuptools versions to use them. Let's talk a bit about performance. There is a pretty well-known analysis of WSGI server performance done by Omed Habib of AppDynamics that I used as a starting point for my own performance benchmarks. That benchmark was published in 2016, and some of you will know it. He compared six WSGI servers: bjoern, CherryPy, Gunicorn, Meinheld, mod_wsgi, and uWSGI. And as I said, I used it as a starting point for my own tests that I want to present today. The benchmark was carried out using a Docker container: they isolated the WSGI server in a Docker container and allocated two CPU cores and 512 megabytes of RAM to that container. The testing was done with a benchmarking tool called wrk, which you can find on GitHub. The servers were tested in random order with an increasing number of simultaneous connections, ranging from 100 to 10,000, which is quite a lot. And wrk was also limited, using taskset, to CPU cores that are not utilized by the Docker container. Each test lasted 30 seconds and was repeated four times. I thought that setup was quite good, it suits what I wanted to do, and I like it a lot; that's why I used it. The number of sustained requests, errors, and latencies were provided by wrk, the benchmarking tool, and docker stats provided the high CPU and memory watermarks. The highest and lowest numbers measured were discarded, and the remaining values were averaged. That's the procedure they carried out in the original benchmark, and I just kept it, because, yeah, you could do it differently, but I wanted to stick with it.
What you can see here is the WSGI application they used for the benchmark. As you can see, it simply returns "OK" in a list. And note the comment saying: doesn't really do anything since we're benchmarking the servers, not this code. We are going to discuss that in a bit. My changes to the original setup: I swapped Meinheld and mod_wsgi for Pyruvate and Waitress, because I wanted to keep the servers that were reported as working with Zope and Plone. Then I changed it to use Python 3 only, because it was still using Python 2 in some cases. And I decreased the number of levels for the number of simultaneous connections. Also, the original benchmark reported very disappointing results for the uWSGI configuration, so I thought I'd change that a bit, using only one process with two threads. The CherryPy configuration did not work any longer, because Cheroot, the HTTP server of CherryPy, is now packaged separately, so I had to change that a bit. And I swapped Docker for Podman, because that's what I have on my machine. Also, the original application simply returned "OK" as a string, but it should be bytes for Python 3. I've put a fork on GitHub for those who are interested; you can find it there. The server versions are the recent versions: I'm testing Waitress 2.0.0 and bjoern 3.1.0, the Docker containers are running on Debian 11, and wrk was version 4.1.0, which is the last released version. Let's look at the benchmarks: the number of requests served. And there we see something that was already reported in the original benchmark: bjoern is really outracing all the other servers. It's a lot faster; it serves a lot more requests per second than any other server. After that, we can see that Pyruvate is not doing badly either. It's got a bit of problems at the start.
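The benchmark application, with the str-to-bytes fix just mentioned applied, would look roughly like this (a reconstruction; the inline comment paraphrases the one from the original benchmark):

```python
def application(environ, start_response):
    # Doesn't really do anything, since we're benchmarking the servers,
    # not this code.
    body = b"OK"  # was the str "OK" in the Python 2 original; WSGI on Python 3 needs bytes
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Keeping the application this trivial means the measured differences come almost entirely from the servers themselves, which matters for interpreting the results below.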
It's obviously still better than Cheroot and Gunicorn, but when there's a low number of simultaneous connections, uWSGI and Waitress are doing slightly better; then it picks up and it can sustain a higher load. Let's look at the CPU usage. We can see that bjoern is at exactly 100%, because it's a single-threaded server, and the other servers can use a bit more CPU. And there's Gunicorn, which at higher loads cannot use the CPU anymore, so its CPU usage goes down. Using more CPU is actually not bad in this case, because it shows that the multi-threaded servers can make use of more than one CPU core. The servers have multiple threads; most of them have two threads, except for Gunicorn, which follows the Gunicorn documentation: five workers for two CPU cores is their recommendation. But as you can see from the chart, it's maybe not a very good idea. Memory usage is more or less okay for all servers, except maybe uWSGI. It was already reported in the original benchmark that it consumes a lot of memory, maybe too much memory, and there's maybe an issue here. And then there are the errors: wrk reports socket errors. We can see that with increasing load, all the servers start showing errors except for Waitress and Pyruvate; they don't show any errors. And uWSGI is hidden here, because uWSGI will show errors for every single request. But it still serves those requests: if you use a browser and open a uWSGI request, you will see the page, you will see the OK text. That maybe points at an issue with the benchmarking tool. That's at least an idea to explain it, because the tool reports errors for every request, but the browser doesn't show them, and a reverse proxy won't show them either. So, as we have seen, there are a number of WSGI servers that are not doing badly, like Waitress, and Pyruvate is also, I'd say, better than average.
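The "five workers for two CPU cores" comes from Gunicorn's documented rule of thumb of (2 × cores) + 1 workers. As a quick sanity check (the function name here is mine, not Gunicorn API):

```python
def gunicorn_recommended_workers(cores: int) -> int:
    # Gunicorn's docs suggest (2 x num_cores) + 1 as a starting point.
    return 2 * cores + 1

print(gunicorn_recommended_workers(2))  # 5, matching the benchmark setup
```

The chart suggests that for this workload the rule oversubscribes two cores, which is why the talk calls it "maybe not a very good idea" here.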
But the actual question that comes out of this benchmark is: why is bjoern so much faster? Of course it's a very good WSGI server implementation with many optimizations, and it's implemented in C, and C is always good for speed. But then again, uWSGI is also implemented in C and C++. That question was already raised in the original benchmark, and they suspected that there was something wrong with the measurement or with the setup. But I have an idea about it, and that is that switching from a single-threaded to a multi-threaded server comes with benefits and costs, and shared access to resources actually adds to the complexity of WSGI server programming. Also, offloading web requests to worker threads only really makes sense when there is actually something to work on. And this is what both Waitress and Pyruvate are doing: they accept requests in one place and then offload them to a worker pool. And you might now remember that comment in the WSGI benchmarking application saying there's not really anything to do here. I think there's a point here: the multi-threaded WSGI servers have additional complexity, and that shows in the benchmarking results, because this particular application is actually better suited to a single-threaded server. But of course, if you have something to work on, it might make sense to use a worker pool. There might also be shared use of expensive resources, like a database connection, which you can realize with a worker pool. But it has to be said that Python's global interpreter lock generally makes multithreading a bit less effective. So with that in mind, let's look at a more realistic scenario and start benchmarking Plone.
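The accept-then-offload pattern described here can be sketched with the standard library. This is an illustration of the idea only, not Pyruvate's or Waitress's actual code:

```python
import queue
import threading

def serve_with_pool(requests, num_workers=2):
    """Acceptor enqueues requests; a small worker pool handles them."""
    tasks = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            req = tasks.get()
            if req is None:          # shutdown signal for this worker
                break
            response = f"200 OK for {req}"  # stand-in for real request handling
            with lock:
                results.append(response)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for req in requests:             # the "accept" side: enqueue and move on
        tasks.put(req)
    for _ in threads:                # one shutdown signal per worker
        tasks.put(None)
    for t in threads:
        t.join()
    return results
```

If handling a request is as cheap as building this string, the queue hand-off is pure overhead, which is the point the talk makes about the trivial benchmark application favoring single-threaded servers.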
Starting from that benchmark, I created a Docker image for Plone 5.2.5 containing three Plone instances, with Bjoern, Pyruvate and Waitress respectively, and I modified the benchmarking script. In the setup I increased the memory to one gigabyte, because I thought Plone might need a bit more. I'm also benchmarking different URLs, not only one URL — we'll see that in a bit. And I'm doing some cache warming before the actual measurement, because Plone and Zope use caches on several levels and I thought it's a good idea to warm them up. It's all in the fork on GitHub, so if you're interested, you can check it out. Then I prepared a ZODB with a Plone site and the URLs that I want to benchmark, and I'm mounting that into the container. I'm benchmarking Plone 5.2.5, but I've pinned Waitress to version 2.0, because Plone 5.2.5 still ships with 1.4.4 and I wanted the latest Waitress version — which is of course compatible with the latest Plone. Once again, if we look at the requests served for the / URL, we can see that Bjoern is still the fastest, but the difference is not really that big anymore; it's followed by Pyruvate, and Waitress comes in at a bit more of a distance. So the difference is much smaller now — there is obviously an impact from the complexity added by the application. If we benchmark the /Plone URL, there is even less of a difference: all three servers are approximately the same, even with increased workload.
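For reference, wiring a WSGI server like Pyruvate into a Zope/Plone instance is typically done through a PasteDeploy server section in the WSGI configuration. A sketch along the lines of the Pyruvate documentation — socket address and worker count here are example values, adapt them to your own setup:

```ini
; wsgi.ini sketch (example values; see the Pyruvate docs for your version)
[server:main]
use = egg:pyruvate#main
socket = 127.0.0.1:8080
workers = 2
```

Swapping servers in such a setup mostly means changing this one section, which is what makes A/B comparisons like the ones in this talk practical.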
Bjoern even starts getting worse. But we have to look at the errors for rendering /Plone: we can see that with increasing load there are a lot of socket errors reported now by the benchmarking tool, and Bjoern starts showing them a bit earlier than the two other servers. The message here is really that you cannot have that many simultaneous connections requesting the /Plone URL. If we look at the CPU usage, we can see that all of them are using 100% of the CPU, because in this first setup I used one worker for each server. So we see what we expect, basically: one worker, 100% of one CPU. I also benchmarked /Plone as JSON — sending Accept: application/json as a header — which might be important for use with Plone 6, because we are bypassing Diazo in this case, at least. But we see basically the same result: Bjoern first, followed by Pyruvate, and Waitress comes last. The most interesting benchmark is trying to download a blob: I uploaded a blob of 5.2 megabytes into the blob storage. Here we can see that Pyruvate and Bjoern are doing a lot better than Waitress when it comes to serving that blob. Now, in the next setup, I increased the number of threads for Pyruvate and Waitress, where possible. We can now see that Pyruvate is already better than Bjoern at serving requests, whereas Waitress starts out better but with increasing load can't quite keep up. We can also see that the CPU usage now actually differs a bit: Bjoern is of course still at 100%, and Pyruvate and Waitress can, to a certain degree, make use of the additional CPU — but not much, actually. So I decided to go a bit further, and for the next benchmark used not two but four CPU cores, keeping the threading setup for Pyruvate and Waitress at two threads.
And now we can see that actually both Pyruvate and Waitress can serve more requests per second than Bjoern, even with increasing load. Some conclusions I have on that: Bjoern is still the clear winner when using a single worker, for all URLs except /Plone. Since /Plone is a bit more of a complex page, there's no real difference in the number of requests served there, and Bjoern maybe even starts showing errors a bit earlier than the two other servers. Adding one thread and sufficient resources lets both Pyruvate and Waitress perform better than Bjoern, so it looks as if there really is a benefit to having a worker pool and a multi-threaded server. Another conclusion is that all configurations fail to sustain higher loads for /Plone — you cannot have more than 50 simultaneous connections. A result that I think is pretty important is that Bjoern and Pyruvate serve blobs a lot faster than Waitress — there's a real difference here. I'd also say that Pyruvate can challenge Waitress in all scenarios, and when adding worker threads, Pyruvate seems to make better use of the added resources than Waitress does. Coming to the end, I want to present a chart to answer a question that some of you might have, which is: does it really make a difference — is it worth switching the WSGI server? We have been using Pyruvate in production now for a couple of months on different sites — plain Zope sites, our Plone sites, and there's also a Pyramid site. What I did here is a sort of A/B test using two ZEO clients, one running Pyruvate and one using Waitress, both on the same virtual machine. In front of them is an Apache web server — we use Apache in this case because we have to do some authentication, and it's policy to use Apache for that.
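The talk doesn't show the Apache configuration itself, but a setup like the one described — one reverse proxy balancing between two local ZEO clients — can be sketched with mod_proxy_balancer. Everything here is hypothetical (ports, member names), and the `bybusyness` method is only a guess at what "fair balancing" means; that method sends each new request to the least-busy backend, which would explain why the faster server ends up serving a larger share:

```apache
# Hypothetical sketch of an Apache reverse proxy balancing two ZEO clients.
# Requires mod_proxy, mod_proxy_http, mod_proxy_balancer, mod_lbmethod_bybusyness.
<Proxy "balancer://zeoclients">
    BalancerMember "http://127.0.0.1:8081"   # ZEO client running Pyruvate
    BalancerMember "http://127.0.0.1:8082"   # ZEO client running Waitress
    ProxySet lbmethod=bybusyness
</Proxy>
ProxyPass        "/" "balancer://zeoclients/"
ProxyPassReverse "/" "balancer://zeoclients/"
```

With a strict request-counting method (`byrequests`) the split would tend toward 50/50 regardless of backend speed, so the observed 47/53 split is only meaningful under a busyness-style policy.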
This Apache web server is doing fair balancing between those two ZEO clients; both use two worker threads. As you can see, this is a very low-traffic site, but still: there's a clear difference in the number of requests that get directed to Pyruvate compared to those directed to Waitress. What you can see here is that the Apache reverse proxy in front of them continuously assigns more requests to Pyruvate. The numbers land at 47% of requests served by Waitress and 53% served by Pyruvate. That difference would also be significant if you would, for example, do a t-test for paired samples — it would be highly significant. And with that I'm coming to the end. Thank you very much for your attention — and yes, consider using Pyruvate. I think it's pretty easy to replace Waitress with Pyruvate, and maybe it suits your setup and your configuration. Thank you. — Yes, that's brilliant. I had very little insight into WSGI servers; I just knew we had to have one. So it's great to see all the detailed work that you've put into this, creating Pyruvate and continuing to work on it and improve it. And I am curious to know how soon we're going to get this as the default WSGI server for Plone. But thank you very much again, Thomas. Everyone, please join Thomas in the Jitsi — I'll be posting the link in the Slack, or you can find it right under your LoudSwarm window. And thank you again. Let's talk over in the Jitsi.
At last year's Plone conference, I presented Pyruvate, a WSGI server implemented in Rust (and Python). Since then, Pyruvate has served as the production WSGI server in a couple of projects. In this talk I will give a project status update and show how to use Pyruvate with Zope/Plone and other Python web applications and frameworks. I will also present some use cases along with benchmark results and performance comparisons.
10.5446/56670 (DOI)
Hello everyone, happy to be here. So on this track, we now have Kim Paulissen from Leuven University, who's going to talk to us about revamping the Plone training and documentation setup for Plone 6. So thank you very much, Kim. — Okay. Hello and good morning or evening everyone. I'm already messing up my slides, apparently. I will talk to you about revamping the Plone training and documentation setup for Plone 6 — I actually volunteered to give this talk. Let me give you a short overview of what we'll discuss today. I've been working alongside the documentation and training teams for the past months. As you will know, we're going towards a new major version, Plone 6, so obviously all the community teams are huddling up and finding a nice excuse to finally do some good work on all the public-facing interfaces we need for Plone 6. That obviously includes the documentation and the training documentation, because — since we have Volto as the new default front end — there's a lot that will change, and it will have a different new look and feel. So all the documentation and training need to accommodate those changes, so we give Plone 6 a whole new vibe. Just to say upfront: this is an informational talk — there's no development going on — but we did feel that someone should present at the conference where we came from, where we are, and where we're going with both the documentation and the training setups. Those of you who follow the trainings have probably already seen the end result, but this is a bit of past, present and future of those things. And last but not least, a question to all of you to contribute, or give a comment on what has been done, or on how you might help in the future.
Just to say that I'm only the messenger — discussions sometimes get heated among developers about how you should do your documentation, and especially which tech stack to use. So: I'm just the messenger, don't shoot the messenger. Okay, next up. I'll discuss the training and documentation revamp. As I said, this is a whole new major Plone version, so all the front ends get a revamp — plone.org is another very important part of this. And as I just mentioned, the idea is that plone.org obviously gets the new default UI — there are a lot of naming discussions about what we should call our front ends, but the default front end for Plone 6 is Volto. There is a separate talk about that tomorrow, so I don't need to cover all of it in this one. I put the links here in case you want them; there's also a news item on the plone.org site if you want to read more about it. That talk will be tomorrow, I think at 5:30 or 6:30 — the link is here, you can look it up, and it will explain more about that part of the changes for Plone 6. These are just screenshots — this is plone.org now. And just to mention: this is obviously not what the new plone.org will look like — we will see that later — but this is just to showcase that there is a Volto demo Plone site where you can have a look, just to see how Plone 6 will look. The main parts are unfortunately still being revamped, but we're close — it's "revamping" because it's still an ongoing process. But first, a brief history of time of our Plone documentation. The main technology, which I will cover for most things, also applies to training: we are a Python community, and in the past both training and documentation have always been built with Sphinx and written in reStructuredText.
reStructuredText is a markup language specifically made for writing documentation — it has a lot of extra features like navigation, a glossary, and easy ways to add links and headings. So it's really built for documentation. And Sphinx — you can pip install Sphinx; it's a Python package, for those of you who don't know, built to create documentation systems. It takes reStructuredText, transforms it, and outputs everything orderly in HTML, LaTeX, and a bunch of other formats. That has worked pretty well for us — it's sort of a Python standard; Read the Docs is a name people might know in this regard. So, that was the technology. I'm also listing here the main chapters that we have in our current Plone 5 documentation. Why? Not because — well, I do want you to read all of the documentation, of course — but this is a talk about documentation and it might be nice for everyone to know; a chance for me to give some tips. I put the main chapters here because when you write documentation you have to write for a bunch of target groups, which in turn becomes more important when you need to change or update your documentation, like we are doing now — keep that in mind. We have, for example, "Working with Content", a part we should not forget: it's for the target group of your Plone editors, so it's about how you make pages, how you make a news overview, how you publish items. The target group of your editors is very important. Then there's "Adapting and Extending Plone", which is a bit more for your power users — site settings and things like that.
"Installing, Managing and Updating Plone" is then more for your sysadmin group — the people who actually need to maintain Plone on the servers of your company or university or whatever. And "Developing for Plone" is, I guess — probably because I am a developer — the part that we read the most when we get stuck and something doesn't work. A lot of technical documentation is needed if you want to develop for Plone, for Plone add-ons, or for customizations in your own company. So keep that in mind when we need to update things. This is just a reminder — I called it the "current Plone documentation" because, so far, this is still the documentation we have; there are of course versions for Plone 4 and 5, which are still live. And then, let's move to the present. So, what happened? We mentioned before that we're moving to Plone 6, which will have Volto as the default front end — which means that, well, we sort of decided to go to Markdown and then look at Docusaurus. Markdown is yet another markup language — and not just "yet another", because over the past few years it has basically become the standard on almost any documentation site you can find on the web. So it makes sense for us to use something that is widely used in the web community. And Docusaurus, as you might have guessed, is React-based, which fits nicely together with Volto, since that's also a React-based product. Docusaurus is a static site generator — for those of you who are wondering — specifically tailored to creating documentation sites, a bit like Sphinx actually, but in a JavaScript framework. Why the shift to Markdown and a React-based static site generator? That's one of the more important things I wanted to discuss.
Since Plone is — I put "headless CMS" here, but it's actually a decoupled CMS: headless would mean we had no default front end, but we do — we actually have one default front end, Volto, plus an extra one — so we're a decoupled CMS with a React-based front end. To make it easier for new contributors — but also for any contributors — we moved to Markdown, because, as I said before, it's very easy to adjust Markdown. Most of our code is open source on GitHub, where it's very easy to edit small things. And a lot of the contributions people want to make — without needing to feel comfortable with the whole tech stack — are just fixing a typo, adding a sentence, adding an image, doing something really small. Editing Markdown on GitHub is one of the easiest ways to do that; it's like a little CMS for the documentation of a big front end. So that's to make it easier: you have easy preview on GitHub, and most IDEs support Markdown formatting, so you get a lot out of the box. The documentation will need to be redone, at least for the Volto front end, and the people who can contribute to that will probably be people used to working with Volto — or at least JavaScript-related tech stacks — so it would be nice if they do not need to completely switch tech stacks. I hear what you're saying — I was a bit doubtful too, because obviously most of us are still Python people, and if we then really want to contribute to documentation we need to change tech stacks, so I know there might be an issue there. But for the people who will need to create the new documentation, it makes things easier, because they do not need to switch tech stacks anymore. And of course, you need to focus on the content.
As Chrissy said in her talk, it's more important that we have people contributing, and that they can just focus on the content and not have to worry too much about the difficulty of setting up a documentation environment. So that's why we switched all those things. Then, for everyone to know: as I said, we are not there yet, but we do have several teams working on creating new content. There are several branches — I specifically list all the branches that I know of, because people are working on various different things — and these are the main branches we will be working on for Plone 6. The main Plone 6 branch is just called "6" — a branch called 6 on the GitHub repository of the Plone documentation. Then obviously we have one for each front end, and one for the back end, which is basically Plone the CMS — what most of us Plone developers are used to. We have to set things up for the two front-end options, so there will be separate documentation, because we have both the Classic UI team working on Classic UI and the Volto team working on the Volto front end, and they are still pretty different in how you edit content, but also in how you customize any part — listings, headings, views; the whole technical way of doing things is completely different. So at least for now they will keep their separate spaces in the documentation. The other thing you should know is that we will not copy over anything from Plone 5. Well — "anything": probably, especially for the editors, small snippets of content. But the idea is that the setup is now completely different from Plone 5, being a decoupled CMS with separate front ends, so a lot of things will have to be created anew. So this is a good chance to start from scratch.
There was an enormous set of documentation — most of it technical, some of it very expansive on lots of things and, let's be honest, not so complete for other pieces of our system. So it is good to start from scratch and really focus on the things that are truly important to people. Have no fear: the old documentation will obviously still be there. We know many of you — like us at the university — are not ready to move to Plone 6 yet, so the old documentation will remain for anyone still needing to customize or change their current environments. Here is, so far, a little demo of what the new documentation looks like. I have it running here, so I can click around — it's really in a demo state. As you can see, it has a little Volto setup, and it already has some nice highlighting — just a little demo page. But this is to show you roughly what it will look like. The colors are there, the logo is there; it's the Volto front end, so it's going to look like a basic Plone site. That's the idea: to keep one look and feel across all our front-end solutions. And then it will be filled — it's not that there's no content yet; as I showed, it's in the separate branches. Different teams are working on it, and it's just a matter of moving the Markdown from all those branches into the main branch once it's in a state ready to go. So, that's the future part now: work will continue. The idea is to have up-to-date documentation for the different parts, especially the most important ones for Plone 6, and to improve some options for contributing. Hopefully we can all just work on the Markdown-based documentation and write some Markdown text, and everything else will be magically handled by the documentation team, the GitHub Actions, and whatever we will add soon to take care of that for you.
One of the things I would also like — because it was missing from the current documentation, which didn't always update live — is to look into CI/CD, meaning that on git push, every little change you make, even if it is just fixing a link, gets deployed automatically to the documentation — like it already does for the training. For later reference: these are the links to the GitHub repo. If you have any issues regarding documentation, please file an issue there. The documentation is on docs.plone.org and will hopefully very soon contain the new version of the documentation for Plone 6 as well. For the documentation, I would like to give special thanks and acknowledgements to the documentation team, the Volto team and the Classic UI team, who I know are working very hard to update their documentation — and of course everyone who has ever contributed to it. A lot of work was done and is still being done, so thank you for that. All right — the Plone training revamp, because that's the good news: the Plone training, as Chrissy also said this morning, has been completely revamped by now. I hope a lot of you attended the trainings, have seen the results and have made good use of it. So this is a bit of the story of how that all came to happen. Again, as with the documentation, we are a Python project, so we were using the default Python tools to generate documentation: again reStructuredText and Sphinx. In the process, for the same reasons I mentioned before, a lot was done to change the setup. And likewise, a lot of Volto-related trainings have been added, especially for this year's conference, because we have a Plone 6 alpha and a lot of people already working with Volto, and they made a lot of Volto trainings, or related trainings. So also here, we moved to Markdown — although it's not really Markdown. Our training documentation has been very good.
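The "deploy on every git push" idea described here is typically realized with a small CI workflow. This is a hypothetical GitHub Actions sketch — not the actual workflow used by the Plone repositories, and the `deploy.sh` publish step is a placeholder for whatever hosting target is used:

```yaml
# Hypothetical workflow sketch: rebuild and publish the Sphinx docs on push.
name: deploy-docs
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.9"
      - run: pip install -r requirements.txt  # Sphinx, theme, MyST parser, ...
      - run: make html                        # build the HTML docs
      - run: ./deploy.sh                      # placeholder publish step
```

The point is simply that even a one-character fix, once merged, triggers a full rebuild and goes live without anyone running Sphinx by hand.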
It's sort of a point of reference — when the developer documentation didn't cover something, there were a lot of use cases that were only covered in the training. The training did have a need for a very technical way of documenting things, and because it's classroom training it needed special features like solutions that should be hidden, good navigation, and a table of contents to find out which training you're in. So it had a few more requirements than the documentation. For the moment it is actually staying on Sphinx — also because the documentation part is not done and we haven't fully extended the Docusaurus setup to fit all the use cases that training might need. So it's still Sphinx, but it's Sphinx with Markdown — and for some of you that might be a surprise. It was a surprise to me, because I was one of the first to do the proof of concept and I thought that was impossible, but it is very possible. And I will pronounce it the way I should: it's MyST Markdown. It's sort of "my reStructuredText Markdown", I think the abbreviation is. I usually call it "mist", but I know the German colleagues might have a problem with that, because in German that's not a good word — so MyST, though I may say "mist" by mistake. But what is MyST? I put the link here for anyone interested in reading more, because it is very interesting.
So you get all the benefits you have in reST, but added inside Markdown, in a Markdown-like syntax — on top of things like headings and code backticks, you have special directive blocks in backticks with parameters to create, I don't know, an admonition, acknowledgements, a side note, a little panel. All the nice things reST has, they provided syntax for, but in a Markdown format. That means people who like reST and all its functionality can still use it, but in Markdown; and if you just want to write Markdown, you can write Markdown as if you don't know reST or don't need any of its special features. What else should I say about this? There is a specific theme that we use, called the Sphinx Book Theme — a lot of Python open source communities use it for their documentation, and the "book" is a reference to Jupyter notebooks. It's actually a theme where, if needed, you can integrate complete Jupyter notebooks, which I think is really cool but irrelevant for the rest of this conversation. With that new tech stack, there are a lot of features that Katja and Philip from the training team needed to create their trainings — Katja and Philip have done the Mastering Plone trainings for the past three years. They really wanted some features the previous setup didn't have, which the Sphinx Book Theme now provides, including the nice Markdown — the option that people who are not used to reST can also write Markdown. The main features are: a better search — a very nice search that also shows you which chapter the reference is in; a way to copy code from code snippets; and a sticky table of contents at the beginning of each chapter — I will do a little demo after.
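As an illustration of the syntax described here, a generic MyST sketch (not taken from the actual training sources) might look like this — plain Markdown stays plain, while reST-style directives become fenced blocks; the exact class used for a collapsible solution (`toggle` vs. `dropdown`) depends on which Sphinx extension the project has enabled:

````md
Regular **Markdown** works as usual.

```{note}
Directives use fenced blocks with a name in curly braces.
```

```{admonition} Solution
:class: toggle

The solution text stays hidden until the reader expands it.
```
````

This is what lets the same file be edited as ordinary Markdown by newcomers while still carrying the admonitions, glossaries, and hidden-solution blocks the training needs.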
It has exercises with collapsed solutions, so people don't see the solution while they try to do the exercises in a training class. There's also a presentation mode, and the glossary is still available. The training team made a lot of changes prior to the conference: we restructured all the trainings and put them in a different order so they fit together nicely, and all the trainings we have were already migrated from reST to MyST. We did a complete overhaul improving grammar and syntax, added the GitHub Actions for the new setup so that on git push everything just goes live, and also, I think, added separate sections for authors and trainers, so there's a little contributing section where everyone can find the information they need. For the full reference, there's an issue on the training GitHub, which I reference here, that has the whole list of features and our discussion of a bunch of things around them. On to the new training — this is what it looks like. People who did the trainings have already seen it; I will do a small demo here. So: it has the logo and the colors, and it looks pretty clean, but still references the same look and feel. You have all the trainings always at your fingertips on the left side — you can hide that if you want to see everything. There's a full-screen mode if you are really focused on reading, where you can still move to other trainings. And there's a little GitHub menu here where you can go directly to the repository, open an issue or suggest an edit, and you can download any page as Markdown or PDF. And this is the sticky table of contents I was telling you about. There's also a little contributing section around here.
So if you would want to build, for example, the documentation — that's actually documented very nicely, and in a couple of lines you can have this setup running locally on your computer, if you have make installed; because it is still Sphinx, of course, that is actually all you need to do to run it. And there's also a whole chapter on teaching — if you want to read a bit more on how you should teach a class on software development, there's a lot of nice, interesting information there. Do I need to show anything else? You can see you still have the little notes, and the table of contents disappears once you scroll down. You can browse through and see the little sidebars, notes, images, code. And — I don't know it by heart, but trust me when I tell you — there are open-and-close solutions. So, that's the little demo of the beautifully revamped training. For those of you who did not know the previous training: there really were a lot of new features added, and a lot of work was done by the training team. Again, here are a lot of links where you can find more information — the GitHub, the people, the training. Also here: if you have questions or issues, please file them as issues on GitHub. The contributing section I just showed in the demo gives you more information on how you can contribute to creating training documentation. And there are some news items on the community site about the overhaul we did, because we obviously wanted to keep the training instructors informed — and, as I saw, some of you did create new trainings in the new setup, so apparently that was working well. If you would like to get in touch with the training team — and I think also the docs team now — there is a separate Discord channel, which I've put here below; I think "plone training and documentation" is the name, or just look for training and documentation on the Discord.
If you have any questions, feel free to contact us over there. I would really like to give special thanks to Katja, Philip Bauer and Steve Piercy, who did most of the work of making all that happen — they really did a lot of work, so thank you very much. I know a lot of people appreciate it, because I referenced that training documentation a lot myself while developing for Plone at our university. And obviously, of course, thanks to everyone who has contributed to all the trainings given this year and all the previous years — that was really a good job. So, one of the last slides. This is my opinion — not the opinion of the documentation or training team. My dream: I would love it if, in the end, the training and the documentation were able to use the same tech stack, so that it would be combined, with all the features we now have in the training — and Docusaurus will obviously also bring a bunch of interesting features specifically made for writing good documentation. I would love it if we just had one tech stack, so we wouldn't have to switch depending on which kind of documentation we would like to contribute to. I would also love to see it follow a proper documentation system — because if you read a bit about documentation systems, there are whole theories about how you should separate how-to guides from tutorials to get you started, and have training documentation as a separate thing built from scratch, where people have a little aha moment when everything seems to work on its own as well. So that would be my dream for the future: that we find a way to have all the nice features in one combined tech stack. I'm not going to sing "Help!" by the Beatles, but help is very much appreciated. We would love it if you got involved, if you have opinions, or would like to document something or do a training.
Feel free to contact us on the Discord channel I mentioned, and these are the main links where you can find more information on either the documentation or the training. So please go over there and see if you can do anything — even if it is just fixing a link, that would be nice. If you find a link that's not working, go over there and fix it; it will go live, though for the documentation maybe with a little delay. These are the main links on how to contribute. And this is just me — in case you don't find the Discord channel: at the end of the slides you can find my username; I'm findable almost everywhere on the web under it, or by my name. Feel free to contact me if you have any questions; I will make sure your questions or comments get to the correct teams or the right people. All right, that was everything. — Thank you very much, Kim. It was nice to have you there — that was looking super good, excellent stuff. So, thank you again. I haven't seen any questions, neither on Slack nor on Slido, so I guess we can move to the discussion area. There's a link: if you scroll down in the LoudSwarm interface, you can see the "join Jitsi" button. We'll meet you there. Thank you all. See you there.
For Plone 6 a lot of presentation layers needed some work:
- plone.com / plone.org
- Plone documentation
- Plone training
This is a short overview of what has changed, a demo of how you can now adjust the Plone docs and training, and of course a warm invite to contribute.
10.5446/56673 (DOI)
Welcome back. Our next talk is theming Plone 6 Classic UI, by a couple of Plone theming experts. Peter Holzer has been a Plone developer and integrator for many, many years now, and so has Stefan Antonelli. Stefan's passion is UI and UX design; he's been the author and maintainer of many Plone add-ons, especially related to the user interface. And Peter Holzer has been working on many add-ons as well as core parts of Plone for eons, it seems. So it is my great pleasure to have them presenting on Plone 6 Classic UI theming. Hello, everybody. Welcome to our talk about theming in Plone 6 Classic UI. Servus from Munich, or at least near Munich. My name is Stefan Antonelli. I work at Operun, and I have worked with Plone for ages. And I'm very happy to introduce Peter. Hi, I'm Peter Holzer. I'm based in Zurich and have worked with Plone since 2004 with my own web solutions company. I'm part of the BlueDynamics Alliance and involved in various add-ons, and in our very own shop package, bda.plone.shop. Before we show the new fancy stuff, let's talk about history: what is the story behind Plone 6 Classic UI? We had several discussions in the community regarding theming. There was the Plone Tagung in Berlin, there was a conference sprint, we had the Alpine City Sprint, and of course we also had some rants about the current situation regarding theming in Plone 5. My personal opinion was that Plone 6 should not work exactly the same way as Plone 5. Everybody tried to use Bootstrap somehow, mostly without success. So our first idea was to map variables from Barceloneta to Bootstrap variables. There is a lot of stuff defined in Plone where Bootstrap already comes with styling or components, so the result was a lot of complicated stuff. But we didn't want to reinvent the wheel. So what is a possible solution for that? What if we switch to a framework and change Plone to use it?
What we have to do is update all the markup in Plone, in all its templates, and just use Bootstrap in the future. We came up with a couple of PLIPs. The first PLIP was made during the sprint at the Plone Conference 2019 in lovely Ferrara, and the idea was to modernize the markup in Plone core templates. We had a couple of ideas on that. The major task: update all the templates in core. OK, we know it's a lie, we never touch all templates, but almost everything we see. One major and a little bit tricky task was to tackle the form library, z3c.form, which is responsible for all the editing stuff in the Plone backend when you edit content types, add content, or things like that. All of this should enable us to write templates and add-ons without writing tons of lines of CSS. A few weeks later, there was a PLIP to modernize the default theme, Barceloneta. And a couple of months later, we came up with a PLIP to modernize the JavaScript story, which is on its way; it's in a branch at the moment, not yet merged, but it will be part of Plone 6. Peter has some slides to cover this topic later on. "Make things easier" is the headline. And with "things" I mean development, templates, all the responsive stuff; the whole theming story should become a little bit easier. Creating a clean and beautiful theme for Plone should be possible without headache. Headache, because creating a modern website or web application is a very complex task. Frontend stuff is complex, and since there are thousands of devices, resolutions, and things out there, it's really painful to tackle that without a framework. It should be easy to reuse components that just work. And with components, we talk about cards, listings, things like navigation. Users expect things to work in a certain way, because there are widely known UI patterns for features like buttons, hover effects, and forms. And with Bootstrap, we now have UI patterns that have been tested and evolved by thousands of projects and users.
From a developer perspective, there should be one way to do things that is used by all components: core, ecosystem, add-ons, and so on. No need to write custom CSS for add-ons. And of course, you still want to be able to do your own stuff. As a developer, you don't want to design; you want to rely on a design system, which reduces the complexity of creating responsive layouts. As long as you stay with the defaults, it's fully responsive, by the way. As a developer, you don't want to think about markup. You want to just use components for forms, buttons, and so on. You want to take the markup and apply it, and you don't want to write custom stuff for each template or for each feature you add. Last but not least, you want good documentation. And we don't want to reinvent the wheel. So we say: tested and maintained default components are part of Bootstrap, so fully rely on that framework, on Bootstrap. So, what's new in Plone? We have the new default UI, which is Volto, but Classic UI will stay as an alternative for at least another five years. We updated the markup based on Bootstrap 5. We also updated the Barceloneta theme, which got a modernized look. And finally, we also modernized our JavaScript stack, to make it easier again, or at least less painful. This is what it looks like: this is Barceloneta as you know it, with the modernized look based on Bootstrap 5. So what is Bootstrap 5? It's still the most popular and most widely known front-end framework. It's well documented, tested, and maintained. There are tons of examples and snippets that you can work with. I have to say, after working with it, it was quite a lot of fun. With version 5, Bootstrap also improved the overall look and feel and completely overhauled the grid system. They updated and extended the color system and added support for custom properties, also known as CSS variables. Bootstrap now has its own SVG icon library. And they don't use jQuery anymore and switched to vanilla JavaScript.
With this step, they finally dropped support for IE, but you can still add polyfills if you really have to support it. So basically, they got rid of legacy stuff, just as we do with Plone 6. Our plan for Plone 6 Classic UI is to stay as up to date as possible with the latest Bootstrap versions. Yeah, let's talk about what we get when we use Plone 6 Classic UI; let's talk about the features. First of all, we get Bootstrap markup. In Plone 6, all templates will use Bootstrap 5 markup. All major templates have been updated. This includes the editing forms, the templates for default content types, listings, the control panels, and a lot more. We try to stick as close as possible to Bootstrap, which means some small things or some details have been changed, like the breadcrumb bar. In some cases a different UI is necessary. For example, the main navigation: we had to change the markup to cover a drop-down navigation with more than one level. We absolutely recommend this minimalistic approach for add-ons as well. Regarding existing components: no custom JavaScript is required for something like show and hide, because there is a collapse component in Bootstrap. In addition to that, we still have our patterns that are known and used in Plone, like the modal and things like that. Plone has tons of features. Please grab the Plone 6 alpha and test it. Report bugs, tell us about missing or unstyled templates. Since we have Bootstrap components, we know they work like a charm. Our documentation is now the Bootstrap documentation: everything you can see there, especially in the components section, works out of the box in Plone. Check it out. Components used in Plone core are breadcrumbs, cards for portlets, or, for example, the tabular listing. You can literally copy and paste stuff from the documentation into a document or into a template, and it basically works. Sometimes you have to tackle HTML filtering a little bit.
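To make this copy-paste workflow concrete, here is a plain Bootstrap 5 card, in the spirit of the snippets in the Bootstrap documentation. The claim in the talk is that markup like this renders correctly in Plone 6 Classic UI without any extra CSS:

```html
<!-- A plain Bootstrap 5 card; all classes come from Bootstrap itself,
     so no custom CSS should be needed in Plone 6 Classic UI. -->
<div class="card" style="width: 18rem;">
  <div class="card-body">
    <h5 class="card-title">Card title</h5>
    <p class="card-text">Some quick example text for the card.</p>
    <a href="#" class="btn btn-primary">Go somewhere</a>
  </div>
</div>
```

When pasting markup like this into content (rather than a template), remember the note about HTML filtering: the relevant tags and attributes have to be allowed in the filter settings.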
One example is the accordion. We literally copied it over; you have to disable HTML filtering, and everything works right away. Yeah, you see it in the example. So what is the new Barceloneta theme? It's an opinionated set of Bootstrap variables. You can change every aspect of the theme with variables. You have overall properties like shadows, gradients, and rounded corners. You have generic variables for things like colors, sizes, and fonts, and there are variables for very detailed aspects, like the inner padding of your buttons or fields. On top of that, we added a few Plone-specific styles and components, for example the navigation, the content type views, or the control panels, and they all work based on the variables that you define. So this is the theme as we have it now, and we'll show you quickly how those variables work. You can easily change colors, add shadows, round your corners, and, as mentioned, those variables change every aspect of your theme, down to a single field. And most certainly, they work with the components that you copy in from Bootstrap. You can check out Plone 6 Classic UI on the demo site already, and I will show you now how you can create or theme your own Plone website. So, Diazo theming is still there, but there's no through-the-web compiling anymore. It is still possible to upload a zipped theme. And in 5.2 we reintroduced a kind of custom CSS, which allows you to add some extra CSS within the control panel of your site. With Plone 6, and thanks to Bootstrap 5, we also have support for CSS custom properties, which gives you quite a range of possibilities to change the look of your Plone site without changing the theme itself, or needing to recompile your styles. The availability of those properties will evolve with the progress of Bootstrap itself. So, for theme development we focus on file-system-based development. The base files for Barceloneta are now also published as @plone/plonetheme-barceloneta-base on npm.
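The variable-driven theming described above can be sketched in SCSS. The variables shown are standard Bootstrap 5 ones; the import path of the Barceloneta base package is an assumption for illustration:

```scss
// Sketch: override Bootstrap/Barceloneta defaults before importing the theme.
// $primary, $enable-shadows, $border-radius and $btn-padding-y are standard
// Bootstrap 5 variables; the import path below is illustrative.
$primary: #007eb1;          // overall brand color
$enable-shadows: true;      // turn on shadows globally
$border-radius: 0.5rem;     // round the corners
$btn-padding-y: 0.75rem;    // detail level: inner padding of buttons

@import "plonetheme-barceloneta-base/scss/barceloneta";
```

Because the variables are declared with `!default` on the Bootstrap side, setting them before the import is enough; no Bootstrap or Barceloneta source file has to be touched.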
It includes all the variables and properties that define our well-known Barceloneta theme. To make it easy for you, we also updated the templates in bobtemplates.plone, and the theme_barceloneta template is now based on that npm package for your theme creation. This means you use the same base for your project, you have fewer files to look after, you just add your few files, and therefore it's easier to update the theme in the future. So how do you create an add-on package? You create a theme package with bobtemplates.plone; make sure it's the version for Plone 6. You create an add-on package and add your theme into the package, where you have three options. There's theme_barceloneta, to inherit from Barceloneta. There's the theme_basic template, to theme just with Bootstrap, without any Barceloneta or Plone-specific styles. Or you have the very, very basic option, just theme, which is only the basic files that register your theme, so you can start your own variant. So basically, you install the dependencies with npm install (they are defined in the package.json file), you add your own styles, run compile, and have your styles ready to show in your Plone site. As before, the styles are registered within the manifest.cfg, but you know that. Diazo is still there; we made some improvements. It's now easier to do modifications within the content area, and we moved the grid column definitions to CSS. Now to JavaScript. There is a lot of stuff here. We no longer use RequireJS, so no more of those "mismatched anonymous define" errors. We also updated jQuery, but maybe you won't need it that much anymore with ES6. As said before, we don't have a through-the-web build anymore, so you can use any build tool you want. Plone, for example, uses webpack, and the plone-compile-resources script is replaced by webpack. Registering your optimized JavaScript files is as simple a snippet as it was in Plone 4.
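The manifest.cfg mentioned above might look roughly like this. This is a sketch only: the `title`, `description`, and `rules` keys are standard Diazo theme manifest entries, while the keys for wiring up the compiled styles are an assumption based on the talk's description and may differ in the final Plone 6 release:

```ini
; Sketch of a Plone theme manifest.cfg (key names for the compiled
; styles are an approximation, not the authoritative schema).
[theme]
title = My Theme
description = A Plone 6 Classic UI theme based on Barceloneta
rules = /++theme++my.theme/rules.xml
development-css = /++theme++my.theme/styles/theme.css
production-css = /++theme++my.theme/styles/theme.min.css
```

The `++theme++my.theme` traversal prefix is the usual way a file-system theme's static resources are addressed.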
Mockup was also updated. It uses the latest JavaScript syntax and features from ES6. We use import statements instead of require, and we use async and await for lazy loading of dependencies. This reduces the file size of the initially loaded bundle. And all the Mockup modernization was done based on the modernization that happened within Patternslib, so Mockup now really uses the same concepts; it's basically the same stack. This also makes it easier to combine them. The resource registry was rewritten to fit all those new, or rather fewer, features. Again, through-the-web compiling is gone. The bundles are still there, but we don't have "resources" anymore, since we don't need them. This allows you to use your favorite bundling method: you register the optimized CSS and JavaScript as before, just with fewer parameters. This is what the simplified bundle registry looks like. Maybe it needs some more reduction, but we'll see about that. And this is the look of the new simplified resource registry: you just see your bundle with a few parameters, you edit it, and you save it. The backend of the resource registry is much simpler and less error-prone. It's based on the webresource package. It does dependency management and conditional delivery of resources. And one other thing: you no longer have to create manual timestamps. Hashes are now automatically created based on the file contents and used for the invalidation of those resources in the browser. The hashes are created on startup of the instance, or on every request when development mode is enabled. So you'll find your bundles are the same as before in your profile, just with less configuration. One more thing: how to deal with icons. We made the decision to use Bootstrap Icons by default in Plone 6. The state of the art is to use inline icons, to be able to style them, and we needed a way to override and customize them. So let's figure out how we achieved this.
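The simplified bundle registration described above could be pictured as a GenericSetup registry.xml fragment like the following. The field names follow the talk's description (compiled JS/CSS paths, enabled flag, dependencies) and are an approximation, not the authoritative schema:

```xml
<!-- Sketch of registering a bundle in a profile's registry.xml;
     key names approximate the simplified Plone 6 bundle records. -->
<records prefix="plone.bundles/my-addon"
         interface="Products.CMFPlone.interfaces.IBundleRegistry">
  <value key="jscompilation">++resource++my.addon/bundle.min.js</value>
  <value key="csscompilation">++resource++my.addon/bundle.min.css</value>
  <value key="enabled">True</value>
  <value key="depends">plone</value>
</records>
```

Note that only the already-compiled files are registered; how you produce `bundle.min.js` (webpack, esbuild, anything else) is entirely up to you, which is exactly the point of dropping the through-the-web build.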
As mentioned, Bootstrap has its own icon library. The current version is 1.6.0, and it keeps growing. The icons are free, they're modern, and they are available as SVG and also as an icon font. Check out the icons at icons.getbootstrap.com. All icons are registered under their original names. In addition, we added context-specific icons; each is basically an alias for the original name, registered to allow overrides in projects. We have icons for content types, registered in the icon expression of the content type XML. We have UI icons for stuff like copy, paste, and save. We have the toolbar icons, and we have MIME type icons with a nice fallback feature: you can register an icon for image/jpeg, and if you ask for something like image/foobar, it will use the fallback for the generic image icon, which is nice when you don't want to register hundreds of icons for every possible image type. The registration for Plone 6 Classic UI is done in plone.staticresources. There is, for example, an icons_bootstrap.xml; check out the package if you want to see this in the code. It's basically a mapping from an icon to its corresponding resource URL. We show an example here for one registry record. We use the prefix plone.icon, followed by the name, in this case alarm, which is the actual name from Bootstrap Icons, and the resource URL to the actual SVG icon is part of the registration. All together, this brings us the icon resolver. That's nice, but how do you finally get the icon into a page template, for example? In Plone 6 Classic UI, we have our all-new icon resolver to look up icons. Technically, it's done as a view, and it's added to the main template, similar to plone_view. In this example, we call the icon resolver's tag method, and it returns an SVG tag that is inserted into the template. We use a structure replace here, which directly inserts the code that comes back from the resolver. It takes an icon name as an argument.
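Putting the registration details together, a single icon record might look like this sketch; the resource path and the field type are illustrative, while the `plone.icon.<name>` naming pattern is the one described in the talk:

```xml
<!-- One icon registration record, following the plone.icon.<name>
     pattern; the resource path below is illustrative. -->
<record name="plone.icon.alarm">
  <field type="plone.registry.field.TextLine">
    <title>alarm</title>
  </field>
  <value>++plone++bootstrap-icons/alarm.svg</value>
</record>
```

A page template could then insert it with something like `structure python:icons.tag('alarm')`, where the exact name the resolver view is bound to in the template may differ.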
There is also a URL method that returns a resource URL, if you need that, for example if you want to build an image tag or something like that. It also handles binary files: for a binary file like a PNG, it returns an image tag instead of, of course, binary code. These fallbacks are implemented, and there is also a generic fallback that shows a Plone icon if the icon is not found. Inline icons, inline SVG icons, because it's 2021: we want to insert icons and we want to style them and animate them. Having them in the DOM gives us the option to style and animate them. The icon resolver is also available in JavaScript. This example is from the query string pattern: we have the import, and then the use of the resolveIcon method. Have a look at the code to see this in action, or for how to do this in your own JavaScript parts. How do you deal with icons in content? You can literally copy and paste an SVG into your HTML. This is not very convenient, but it works at the moment. A plugin for the TinyMCE editor is planned; it's not that complicated to select and insert an icon that is registered somewhere. There is some discussion in progress, so if you have ideas or want to give feedback, please ping us after the talk or pop in during the sprint. One idea is also to create a setup profile that installs an icon font, which is available for Bootstrap Icons and is already known from something like Font Awesome. It's well known; it uses an icon tag and adds the icon as a font in that case. At the moment, it's an idea; it's not part of the code. To show how this could work, how you can override icons or use your own icon font, we have a package called collective.fontawesome. It is work in progress, so please give us a few days to finish it. It includes an example of how to add custom icons in Plone: you have to somehow provide the icon SVG files, and you have to register the icons in XML.
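The MIME-type fallback described earlier (exact match first, then the major type, then a generic Plone icon) can be sketched in a few lines of plain JavaScript. The function and the registry here are illustrative, not Plone's actual API:

```javascript
// Sketch of the MIME-type icon fallback described in the talk.
// Names are illustrative, not Plone's actual registry API.
function resolveIconName(mimetype, registry) {
  if (registry[mimetype] !== undefined) {
    return registry[mimetype]; // exact match, e.g. "image/jpeg"
  }
  const major = mimetype.split("/")[0]; // "image" for "image/foobar"
  if (registry[major] !== undefined) {
    return registry[major]; // fallback to the major type icon
  }
  return registry["default"]; // generic Plone fallback icon
}

// Example registry mapping icon keys to resource URLs (hypothetical paths):
const registry = {
  "image/jpeg": "++resource++icons/jpeg.svg",
  "image": "++resource++icons/image.svg",
  "default": "++resource++icons/plone.svg",
};

console.log(resolveIconName("image/jpeg", registry));
console.log(resolveIconName("image/foobar", registry));
console.log(resolveIconName("application/x-unknown", registry));
```

The point of the scheme is the middle step: one registered `image` icon covers every unregistered `image/*` subtype, so you never have to enumerate hundreds of MIME types.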
And there is a script in the package that generates all the XML for the icons based on the files in the folder, which is convenient, so you don't have to write all that by hand. The manual selection for UI or content type icons is of course up to you, if you want to replace the original ones. You can also mix different icon fonts if you prefer that. The last step would be a setup profile to add an icon font, but that is also part of the discussion at the moment. Finally, we want to give you a quick overview of what else we changed within Plone. Typography and the overall look were modernized. We also reworked the content type templates, with the lead image on top, and improved the display of the metadata fields. Events also got an update, and we streamlined the views for images and files. We updated almost every listing; there are still a few that need some tweaks. This is the summary view, this is what the tabular view looks like, and this is what your typical edit form will look like from now on. The control panel also got an overhaul: we changed the navigation and used bigger icons. This is, for example, the portal actions control panel, all done with Bootstrap components. This is what the users control panel looks like. And we also made sure that our content looks good on mobile devices. So how do you get a theme out of all this? We already updated the trainings, and we gave the trainings on Saturday. There's now a part on theming based on Barceloneta; there's a part on theming from scratch, if you're going to base it only on Bootstrap; and there's a part on theming with Diazo, if you want to implement a theme that you got from somewhere else. And the documentation will be updated pretty soon. I want to thank you, and thank all the people in the community who made this step possible. It was always a big effort, and we couldn't have done it without each of you. Thank you. Thank you as well from my side.
I have to say: talk to us if you have ideas, feedback, or whatever you want to discuss. We have some contact information here. Join us for the sprint if you want to get your hands dirty; every bit of help is welcome. Documentation is also welcome, if you just want to write down some notes on how it works. Talk to us on the Plone Discord: we have a Classic UI channel where we meet regularly and work on this stuff. So feel free to ping us, feel free to join us. And thank you, and have a nice conference. Thanks, Stefan. Thanks, Peter. Thanks for joining us. That was a great overview of Plone Classic UI theming, and it's amazing to me how many changes have gone into it. These two leaders of this particular work have been so amazing; it's an amazing amount of work, and I'm looking forward to spending time reviewing what's going on there. And as you know, I think that Classic UI being updated is so important to Plone, as we have such a large installed base of big sites, and it will take time to get them moving over to Volto. So it's great that we have this option to continue using Plone Classic UI.
We made a couple of talks for Plone Conference 2020. This talk is basically a summary of the story behind modernize Plone's classic UI + Theming based on this work. We're going to show how to create a theme for Plone 6 Classic UI based on Barceloneta and from scratch.
10.5446/56674 (DOI)
Alright, and we are back. Plone Conference 2021, Track 1, Day 1, with Tiberiu Ichim, who is a leader in the development of Volto, a thought leader in the development of Volto. He gave the Volto add-ons training last year and this year, which I found fascinating and also brilliant. And he's here to give us a presentation about something else he's come up with, which is the pluggables framework for Volto. Thank you, Kim. I will start to share my screen and give you the presentation. So I'm going to give you a presentation on Volto pluggables, which is an experimental technology that already ships with Volto. But this presentation is more of an argument for extensibility in the CMS UI and in Volto, taking into consideration the shape of Volto right now. And let's look at the too-long-didn't-read version of what exactly this is, because I am going to talk a little bit more about the abstract side of the pluggables, and I don't want to keep you too much in the dark. If we look at what exactly pluggables are: they are a way to design or create pluggable things like toolbars, where on one side we have a big slot, let's say, and on the other side we have something that we push into that slot, but from outside the component tree. So they're not direct children; those slot fills come from somewhere else. The basics are just the Pluggable component, which is the slot declaration, and the Plug, which is the slot filler. The implementation in Volto is based on a React "view slot" approach, and I'm the author of this support in Volto. And, yeah, we're going to talk about it, I guess. Okay, so now that we got the basics out of the way, we need to understand why we would want them. Right now we have two systems running in parallel, classic Plone and Volto, and at first glance they appear to be somewhat equivalent. But I think Volto is a big evolutionary step for Plone.
And to get similar capabilities in Plone Classic, you basically have to rewrite Volto. And pluggables are an example of the kind of thing that marks the departure from a classic static page, or a static page with JavaScript-based interaction, to a really full-page integration of dynamic components. And if we look at the big picture: as you know, I work with Eau de Web Romania, and our main client for the Volto websites is EEA, the European Environment Agency. We've been working with Volto for the last two years. We are a big contributor to the Volto ecosystem, and we are one of the early adopters of Volto. Many of the public sites are already on Volto, or they are in the process of being migrated to Volto. One particularity of these websites is that the CMS side is very, very strong; there is a lot of technical content. But we, as contractors, and I as a contractor, would like to focus more on the tools to deliver and produce that content, not necessarily on just mundane presentation and variations of the same thing. And with Volto, we've been able to make this process of publishing environmental data feasible for website editors, not just dedicated contractors, right? That's because we build powerful UIs for power users. And I think by doing so, we maximize the investment of public money. And I have a picture here: just a quick search in the EEA GitHub organization for the word Volto, and I already get 91 results. So EEA already has over 90 Volto things. And that leads us to our concern, our question: how can we scale working with Volto? You can see our working landscape: we have multiple websites, we have small teams. That's why the add-ons infrastructure was one of the first contributions that we made to the Volto project. And, yeah, you can see all of these add-ons. They're nice, already open source, you can take a look at them. There are full websites already fully published in the EEA repositories.
They show everything, from the deployment to the integration, absolutely everything. Of course, there are many other companies working with open source Volto code, and they do the same: RedTurtle, CodeSyntax, Rohberg, kitconcept, of course, and many others. If you look at the GitHub readme page for Volto, you can find all of them, and there's also the awesome-volto list in the collective for that. Okay, so far we've scaled Volto with add-ons. But we are already starting to see that some add-ons need to provide extension mechanisms, and that also applies to Volto itself: Volto needs to provide more extension mechanisms than it already has. For example, one add-on, volto-slate, has three or four other add-ons: add-ons to the volto-slate add-on, right? And, yeah, I could show you a ton of ways that we abuse, for example, the columns block or the tabs block, because they provide a generic data storage mechanism for multi-dimensional grids, and we're just looking at extending how that multi-dimensional grid looks. It can be an accordion, it can be tabs, it can be who knows what else, right? On the other hand, one of the things that makes Volto really attractive is developer friendliness. And I've seen this many times already: new developers can become productive very fast with Volto. So in whatever we do, we have to keep things light and understandable, and we don't want to scare them with things like dependency injection or complicated component lookup in an opaque registry. And, yeah, we're going to look at scaling up Volto interactions, because the nice thing is, pluggables provide a way to scale up these interactions, and I'll walk you through a better understanding of this. So let's start at the beginning. In the React world, the top-down approach is really strict: components pass properties to their children, children can call functions passed down as props, right? And that's the basis of React.
In my opinion, that strictness is also what makes it great, because I have experience working in systems with two-way data binding, and I know that we as developers can create a mess with it: we arrive at a situation where we lose track of what happens in the system. But we need communication between different component trees sometimes. And for that, the traditional solution is something like Redux or React context. There is no magical solution where you can just say: okay, I have this component, and anybody else can communicate with that component. And by communicate I mean, for example, changing the inner state of that component, or calling something that's private in that component, and so on. So this makes the components frozen in their implementation, and there is no protocol for ad hoc communication between components. And we have to take into consideration that the UI state is fluid: the global state of the application, as represented by the single page application that Volto is, is always in flux. It doesn't just model real data, you know, Plone content coming from the server, but also the state of interactions. And trying to model all these transient things as configuration would be really hard; we'd just end up relying on documented lookup keys, and it would just be messy. But extensibility also enables scalability and reuse. Because of their configuration, Volto blocks can be recycled: a new view template can reuse the block data to show things in a different way, and the variations can extend the block schema to add features to a block. These are things that we have recently added to Volto, or they were there in Volto, but we just made them more visible and more framework-like, let's say. So, yeah, but again, this sort of extensibility needs configuration, and configuration means designing an API and an extensibility model up front.
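As a concrete illustration of the "variations" extensibility just mentioned, a Volto add-on typically extends a block's `variations` array in its configuration function. The shape of a variation entry (id, title, template) matches Volto's block variations; `TeaserListTemplate` is a hypothetical component:

```jsx
// Sketch: registering an extra view variation for the listing block
// in an add-on's applyConfig. TeaserListTemplate is hypothetical.
import TeaserListTemplate from './TeaserListTemplate';

const applyConfig = (config) => {
  config.blocks.blocksConfig.listing.variations = [
    ...config.blocks.blocksConfig.listing.variations,
    {
      id: 'teaserList',
      title: 'Teaser list',
      template: TeaserListTemplate, // reuses the block's data, new rendering
    },
  ];
  return config;
};

export default applyConfig;
```

This works because the block data is plain configuration-driven content: the new template just reads the same stored data and renders it differently.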
In Plone, we have the Zope Component Architecture, and with it this need for designing an API up front is reduced, and there are already established patterns and best practices for this. When programming Plone, the Zope Component Architecture is its most basic language. Everything is a component, and writing an interface and then an adapter is the most natural thing in Plone. And because of this, we're pretty much guaranteed pluggability everywhere. But the pluggability needs are not just visual; they are also something that has to take interactivity into account, because it's a user interface. It's not just static HTML served from a backend server, right? A good example of a pluggable UI in Plone is the viewlet manager: you declare it once, you include it in the template, and it will render things inside of it. So we can look at pluggables as being on-demand viewlet managers, let's say. But we've been doing websites with Volto without them. I mean, okay, viewlets didn't exist in Plone from the beginning, and they don't exist now in Volto, and we've shipped many, many websites without them. So it's not like we're missing the viewlets and cannot continue without them; this goes way beyond that traditional use case. Okay, so just to make things relatively clear, to give existing Plone developers an analogy: we can say that the Pluggable component that is declared in React, a Pluggable with a name, would be mostly equivalent to a browser viewlet manager, right? And once we have that viewlet manager on screen, once we have the Pluggable, we can then use a Plug, like a viewlet, to push content into that Pluggable. But, you know, Plone's viewlet managers and viewlets are static, and you can trace the request/publish cycle all the way back to CGI at the beginning of web apps: request, response, rinse, repeat, right?
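For comparison, the classic Plone side of this analogy is a viewlet registered in ZCML for an existing viewlet manager. The manager interface is a real one from plone.app.layout; the viewlet name and template are illustrative:

```xml
<!-- Classic Plone analogy: a viewlet pushed into a viewlet manager.
     The viewlet name and template here are illustrative. -->
<browser:viewlet
    name="my.onlinefriends"
    manager="plone.app.layout.viewlets.interfaces.IAboveContent"
    template="friends.pt"
    permission="zope2.View"
    />
```

Declared once, the manager renders every registered viewlet inside itself on each request, which is exactly the static, request/response version of the slot-and-filler idea.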
And with Volto being a single page application, the whole application state is continuously shifting and mutating, and, to repeat again, the interactivity in Volto needs to go beyond "yeah, let's just display this additional thing here". It needs to allow inter-component communication, passing down props out of tree, and because it can do that, it can act as a generic framework. React already provides, let's say, out-of-tree rendering of components; there is already a portal in ReactDOM. But we're not talking about that, because that is way too hard to control. Okay, so to give you a simple, more practical use case: we can have a pluggable above the document content, and we can have a component, right? We can say: okay, we're just going to show a list of my online friends just above the document content, and that could be provided by an add-on. So we would render the Plug. It needs to say where it should render, which pluggable it targets, and we can also pass an order argument so that we can somehow control the order in which the plugs appear; that's similar to browser viewlets. And of course, the only thing we have to make sure of is that our Plug component is rendered, and for that the most basic thing we can do, for example, is to use config.settings.appExtras, which is kind of like a catch-all slot that Volto has, which will just render everything that's in that configuration array. But there are more use cases. For example, a plug can override another plug: if we have a plug with the id "button", we can then render another plug with the same id, and it will override the initial plug. And I'll explain a little bit later how exactly this implementation of plugs and pluggables works.
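The slot-and-filler pair just described might be sketched like this. The import path and the exact prop names are assumptions based on the talk, not a verified API reference:

```jsx
// Sketch only: import path and prop names are assumptions.
import { Pluggable, Plug } from '@plone/volto/components/manage/Pluggable';

// Somewhere in a layout component: declare the slot.
const DocumentView = () => (
  <div>
    <Pluggable name="above-document-content" />
    <div id="document-content">...</div>
  </div>
);

// In an add-on, out of the component tree: fill the slot, with an
// order argument similar to a browser viewlet's ordering.
const OnlineFriends = () => (
  <Plug pluggable="above-document-content" id="online-friends" order={10}>
    <ul>...my online friends...</ul>
  </Plug>
);
```

A second Plug rendered later with the same `pluggable` and the same `id` would shadow this one, which is the override mechanism described above.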
We might get into situations where we are fighting, let's say, over which plug is rendered, and for that we have a mechanism for controlling the rendering which is similar to React's effect dependencies. But the nice thing is we can actually read, inside our plug, data that comes from the pluggable. So imagine that, let's say, we have a toolbar, and that toolbar has, in its context, in its closure, some variables, some data. With the plugs, we can actually read that data coming from the pluggable and we can interact with it. And not just data: we can also, for example, pass functions, callables, so that we can have plugs that send messages to the pluggable, right? So basically, with an add-on we can create something that mutates state in another component that was not explicitly designed to allow the sort of interaction that the add-on provides. And another thing that I mentioned, but I will mention again, since this presentation can serve as a reference: the dependencies of plugs can be used to control when the plug is rendered and re-rendered. Just to make sure that, for example, our plug cannot be overridden, or that, when some data is changing in the system, our plug is back online again, let's say, or rendered. Another advanced use case is, for example, that in the pluggable we might intercept the plugs and have them rendered in a particular way. So, for example, in the toolbar example, we might say, okay, I'm going to support plugs, and they're going to be kind of like buttons, and I'll just tuck them into a particular wrapper or do anything with them. It's up to the pluggable, sorry, where and how it renders the plugs. And this Pluggables framework is in Volto, as I've mentioned, but it's not actually widely used at this moment. And we are even still learning how to really take advantage of it.
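The data-passing idea, a plug reading state from its pluggable and sending messages back through a callable, can be modeled in a few lines. Again this is a plain-Python sketch with invented names, not Volto's React code:

```python
# Toy sketch: a "toolbar" pluggable exposes its state plus a callback to
# its plugs, so a plug from an add-on can read that state and mutate
# state upstream through the provided callable. All names are made up.
def toolbar_pluggable(plugs):
    """Render plugs, handing each one the toolbar's context."""
    messages = []
    context = {"selected_block": "image-block", "notify": messages.append}
    rendered = [plug(context) for plug in plugs]
    return rendered, messages

def style_button_plug(context):
    # The plug reads data coming from the pluggable...
    label = "Style %s" % context["selected_block"]
    # ...and uses the provided callable to send a message back upstream.
    context["notify"]("style-button-rendered")
    return label

rendered, messages = toolbar_pluggable([style_button_plug])
print(rendered, messages)
# ['Style image-block'] ['style-button-rendered']
```

In React the same shape appears as props and callbacks flowing through context rather than function arguments, but the contract is the same: the pluggable decides what it exposes, the plug decides what to do with it.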
And that's my way, with this presentation, of trying to popularize this technique and make it more public, so that we Volto developers are aware of this feature and can include this framework so that our work is extendable in the future. So, what I'm showing on my screen is two add-ons. The first one renders the "15% first draft" button on the left. That comes from volto-workflow-progress. Now, volto-workflow-progress, you can look it up in the EEA GitHub repository, has a pluggable. And then there is another add-on called volto-editing-progress that uses that pluggable to render the timeline control on the right, the dots with "published" and "in publication" and so on. And it only shows up when the button on the left is hit, right, or you hover the mouse over it. So it's an extension of an extension, practically. Another use case, and this is one that is, let's say, close to my heart, is the Quanta toolbar. That's because I anticipate, and at some point Victor will show in this conference what exactly Quanta is and how we envisage it will behave and look like, but yeah, we can look at this and see that toolbars are very much a case study for pluggables, because they are highly interactive, and what toolbars show, which buttons they show, is highly dependent on the context. And for example, with the Quanta toolbar, the convention is that each block has a single toolbar, which is not the convention right now in Volto. And if we make sure that there is always a toolbar, we can have any block render something in that toolbar. And then we can enable something like a really advanced scenario where we can really control what exactly is shown in that toolbar. For example, when we take the columns block, or the slots that I'm going to talk about, they have to be able to reuse this toolbar.
And more or less, we don't want to hard-code all of this and push it into a giant configuration registry where you would get lost. At least we give it a try. Maybe we'll have to fall back to that, I don't know. But at least we can give it a try. Okay, another example is volto-block-style, which I have shown at some point last year, and we are using it on our websites. That add-on has to show a palette icon, and with it, it can provide wrapping and styling for the block. If we had a permanent toolbar that we could rely on, then that hard-coded placement at the top right would not be needed anymore, because we could just plug into this toolbar, and so on. There are some limitations that we have to be aware of with Volto pluggables. For example, there is no server-side rendering support for them. And I don't think that's a problem, because we have slots for things that are more configuration-driven, let's say, and server-side rendering dependent, or where that is really, really needed. I see pluggables as more for transient things like buttons, toolbars and so on, for which we don't really need server-side rendering. I've already mentioned the dependency list that you have to be aware of, and, of course, the limited adoption that we have right now. And I promised that I'd go over the implementation: I'll just show the code, more or less, because I don't want to go into too much detail, and you kind of have to know a little bit of React to be able to follow. Basically, the PluggablesProvider is a context provider that we wrap all our Volto components with. It sits right at the top of the Volto rendering tree. It provides a context and it has an inner state inside. And the Pluggable component will be rendered as a child in that context. It will have access to that context, and once it renders, it can subscribe and write into that global context. The plugs can do the same.
So you can see here, for example, the dummy implementation for a Plug. When the Plug renders, as a side effect it can call a function to register a callable that will return some children. Yeah, I mean, it needs a little bit of study; this bit of code is not that complicated, but of course. And the Pluggable becomes a little bit simpler, because we can just grab a get-plugs function from that global context provided by the PluggablesProvider, grab all the active plugs and just map over them and render them. And with that concludes my presentation. Thank you for attending and watching it. Thank you, Tiberiu. I'm always in awe when I hear you working ideas out and figuring out how they're going to turn themselves into production-ready code. And it's really been fantastic to have you working so hard, so heavily, on Volto and helping the Plone community move forward. So thank you very much, Tiberiu. And for those of you who are watching, please join Tiberiu in the Jitsi, which I hope you know by now is the blue button underneath the video frame. And he'll be happy to answer your questions. And maybe you can help him develop the idea some more. Thank you again. Thank you. Thank you, Kim. See you.
An introduction to the new Volto developer-targeted feature, the Pluggables framework.
10.5446/56677 (DOI)
No, I haven't started the screen sharing yet. Good morning. We are back for Plone Conference 2021, day two, track two, with Philip Bauer, who's a friend of mine. Well, at least I hope he's a friend of mine. We are back for Plone Conference 2021 day... well, that was embarrassing. But not any more embarrassing than the kind of things that Philip pulls in his live demos and his coding. No, actually, I'm kidding. Philip is, as you know, a leader in the Plone community, the developer who led the effort to migrate Plone to Python 3. And he continues in his efforts to make migrations even easier. As you know, he's also the creator of the Mastering Plone class. And so today, Philip is going to be talking to us about Plone relations, which are a very powerful feature of Plone. And I'm very pleased to have him take it away. Thank you, Kim. Thanks for the kind introduction. I'll not waste any more words about myself. I think you might have been there a year ago, where I gave a lightning talk saying why relations are awkward, or weird, something like that. By the way, do you like my hoodie here? The Plone hoodie just arrived; just before this talk I had to jump on the bike and go to the post office to pay my taxes for that. Excellent. So, relations. Why did I change my mind? Because people helped me to improve things in such a way that my mind got changed. But before that: what are relations in Plone? I want to read you this. Line one of the README of zc.relation, that's the package that's like the core of it all, says: "The relation catalog can be used to optimize intransitive and transitive searches for N-ary relations of finite, preset dimensions." Okay, I guess, would be my reaction to that. And a bit later, in line 1,967 of the same README, it says: "Okay, the next part is where your brain hurts. Hang on." It continues from there, and I say: please make it stop. So, why am I telling you this?
Because relations in Plone are incredibly powerful. This is a very simple example of what they can model: a family tree, and you can ask the relation catalog for all children and grandchildren of Harry and you get the appropriate results. It covers circular relations and all these words that I actually mostly don't even know the real meaning of. And the thing is, Plone doesn't use any of that. It is, I'm lacking a good comparison, but it's like a toddler in a Lamborghini. Relations in Plone are mostly only used in three places. One is the related items field, which you can see in the screenshot in Plone 5 and, in Plone 6, in Volto. That is a relation list where you can link to arbitrary items, like a "see also" thingy. And there are also two additional relations that don't have a corresponding field: the working copy relation, where a working copy references its original, or vice versa, I actually don't know, and the link integrity relation, called isReferencing, which links items that are linked to each other in HTML. If you use an internal link, that doesn't only create the link in HTML, but it also stores a reference that is used to give you a warning when you delete something that is referenced from another page. These two, by the way, have their own APIs and they all work fine, and we only have this one field. But relations are much more complex than that. There are a couple of things that are needed to build a single relation. I'm not going to read the names of all these packages; a couple of you will know some of them by heart, some of them are less important for relations, some are more important, but it's all of that plus all the dependencies, which is basically the whole stack. So the situation is pretty complex, and the question is: is that okay? The answer is yes. That's actually totally fine.
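The "children and grandchildren of Harry" query is a transitive search. In Plone that's what the zc.relation catalog is for; the idea itself fits in a few lines of plain Python. This toy version (invented data, no cycle handling, unlike the real catalog) just walks a "parent of" relation two generations deep:

```python
# Toy illustration of a transitive relation query, NOT zc.relation itself:
# follow "parent of" edges up to max_depth generations from a start node.
parent_of = {
    "Harry": ["Bob", "Alice"],
    "Bob": ["Carol"],
    "Alice": ["Dave", "Eve"],
}

def descendants(person, max_depth=2):
    """Collect children, grandchildren, ... up to max_depth levels."""
    found, frontier = [], [person]
    for _ in range(max_depth):
        frontier = [child for p in frontier for child in parent_of.get(p, [])]
        found.extend(frontier)
    return found

print(descendants("Harry"))
# ['Bob', 'Alice', 'Carol', 'Dave', 'Eve']
```

The real catalog additionally handles circular relations, arbitrary query dimensions and indexing, which is exactly the machinery the talk says Plone barely uses.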
And we can consider that an interesting tidbit of information that you can just forget from now on, because it's not going anywhere. It is good because this stack is stable. It is super fast. It is extremely flexible, way more flexible than Plone needs. And it actually is documented, even though I would put a question mark there, because, I'm not sure about you, but I didn't understand what I just read to you at the beginning of this talk. Gary Poster, if you see this at some point: what the actual fuck are you trying to tell me? So, why am I giving this talk? There must be something new, otherwise I wouldn't do that. There are four things that are actually new, and I want to show them off to you and tell you how your life will be much easier in the future, or from now on, or actually since a couple of months in the past. Number one: a sane Python API for relations. Number two: a control panel to manage, inspect and rebuild relations. Three: helpers to do exactly the same in code. Four: better and more relation fields. So, number one is a sane Python API for relations in plone.api, starting with version 2.0.0a1, just released a couple of days ago. The plone.api package has a module for relations, and it has basically CRUD: get, create and delete. And here are two examples. The first one, sorry, it's not create, it's get, because that wouldn't work for a create, obviously, because the source is missing. This returns a list of all relations where Bob is the friend of someone, so whoever considers Bob their friend is being returned as a relation in this case.
So, a good thing about this API, there are many, many good things, but it only returns those relations where the object is actually something that you have the permission to view, unless you say unrestricted equals True; then it does something like an unrestricted catalog search, which returns brains for objects that you're not allowed to see. The other example returns all relations that are going out from Alice, no matter which kind: if Alice has a working copy, if it has related items, and so on, all of those that are going out. Another example, for the create method: this creates a relation with the name "friend" from Bob to Alice, so Alice is now Bob's friend. And Plone is built on schema-driven content, so if there is a field on Bob, for example a relation list or a relation choice, and this field is called "friend", then the field is updated, which means either Alice is added to this relation list, or, if it's a relation choice, Alice is set as the relation choice, and if there was something else set, it gets replaced. And if there is no field, then just the relation is created, similar to the working copy or the link integrity relation. There's also delete. The first example deletes all of Alice's friends. And the second example deletes all "enemy" relations in the whole database, no matter from where or to whom. And again, in the delete example, the fields corresponding to these relations, list and choice fields, are updated. This supports only Python 3. It will be shipped in Plone 6, and you can probably use it in Plone 5.2 if you run Python 3.
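The semantics of those spoken examples can be written down as a standalone sketch. The real calls live in plone.api's relation module and need a running Plone site; this toy only models the described behavior of get, create and delete with source/target/relationship filters (no permission checks, no field updating):

```python
# Standalone toy mirroring the get/create/delete semantics described in
# the talk. This is NOT plone.api; it only models the filtering behavior.
relations = []  # each relation is a (source, target, relationship) tuple

def create(source, target, relationship):
    relations.append((source, target, relationship))

def get(source=None, target=None, relationship=None):
    """Return relations matching any combination of the three filters."""
    return [
        rel for rel in relations
        if (source is None or rel[0] == source)
        and (target is None or rel[1] == target)
        and (relationship is None or rel[2] == relationship)
    ]

def delete(source=None, target=None, relationship=None):
    for rel in get(source, target, relationship):
        relations.remove(rel)

create("Bob", "Alice", "friend")    # Alice is now Bob's friend
create("Carol", "Bob", "friend")    # Bob is Carol's friend
create("Alice", "Mallory", "enemy")

# "All relations where Bob is the friend of someone":
print(get(target="Bob", relationship="friend"))  # [('Carol', 'Bob', 'friend')]
# "Delete all enemy relations in the whole database":
delete(relationship="enemy")
print(get(relationship="enemy"))                 # []
```

In the real API the omitted pieces are exactly what the talk highlights: permission filtering unless unrestricted is requested, and keeping relation fields on the objects in sync.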
For Plone 5 projects, you should consider using collective.relationhelpers, which has almost the same API methods. They work slightly differently, and it doesn't have delete. So get is called get_relations there, and create is called link_objects. The methods vary a little because we had a couple of discussions before finishing the PLIP to add this to plone.api, and the better solution is now in plone.api, but I didn't want to change the collective.relationhelpers implementation, because I'm using that all over the place. Number two is a control panel for relations. There's also a PLIP for this; the other one was a PLIP, and this is also a PLIP. It looks like this: in Plone 6 there is a new link in the control panel, and you can go there, and I'm going to demo that quickly. So here, "inspect relations" is the control panel, and it gives you a drop-down list of all kinds of relations that exist in the database; this database has a couple of relations. So, let's inspect "project leader", which is like the leader of a project, and it shows all project-leader relations. So, this project has this project leader. That is nice and good, and there can be multiple project leads, but it can also show that as back relations, switching the display logic around, which shows you: this person is the project lead for these projects. That is, in this case, a more interesting relation display, and you get more information out of it. The control panel has a couple of additional features: it allows you to inspect all relations in one table, and it has a magic button at the bottom, and if you heard any of my talks about in-place migrations in the last two years...
...you will realize that this button can be a total life saver. I'm not clicking here, even though this is just a local instance, because it'll probably take a while: this will purge all relations from the database, store them in an annotation on the portal, load this annotation and recreate all relations and their fields, populate the appropriate choice and list fields, and all the other relations will be restored. So, why is that interesting, why is that important? Going back to relations: the reason why this makes sense in some cases is that if you have a relation field, if you have a relation to some other object, then a relation value is added to that field. And that relation value is not only in that field, in that list, for example, for relation list fields; it also has a parent pointer to the source object, and it lives in the BTrees of the relation catalog. So this relation value is referenced twice, or has two references, and it has references to objects. One of the tricky things there is, when you migrate from Archetypes to Dexterity, the relation values still have references to the old Archetypes objects, which prevents ZODB packing from garbage-collecting these Archetypes objects, so you have dead-ish Archetypes objects living in your database. But you can easily get around that by clicking this button. And yeah, there is also added complexity with intids and stuff like that; I don't want to get into that. You can just use that button, and in the future a lot of your problems with regard to relations will probably be gone. So, again, this is built into Plone 6; in Plone 5 you can use collective.relationhelpers. It has exactly the same user interface, it just doesn't register its own profile, so it doesn't have a control panel link, which is not really terrible, because you can just remember the URL for that.
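The export/purge/restore cycle behind that button can be sketched as a round trip. The real helpers live in Plone (the talk points to Products.CMFPlone and collective.relationhelpers); the function names and data below are invented for this standalone model, which only restores the catalog data and skips the field repopulation the real code also does:

```python
# Toy sketch of the purge-and-rebuild round trip: store all relations in
# an annotation, purge them, then recreate them from the annotation.
# Function names are made up for the sketch, not Plone's actual helpers.
relations = [
    ("talk-1", "speaker-7", "speaker"),
    ("doc-a", "doc-b", "relatedItems"),
]
annotation = {}  # stand-in for an annotation on the portal

def store_relations():
    annotation["ALL_RELATIONS"] = list(relations)

def purge_relations():
    relations.clear()

def restore_relations():
    # In the real implementation this also repopulates the relation
    # fields on the objects; here we only restore the stored tuples.
    relations.extend(annotation.pop("ALL_RELATIONS"))

store_relations()
purge_relations()
assert relations == []          # everything gone, including stale values
restore_relations()
print(relations)
# [('talk-1', 'speaker-7', 'speaker'), ('doc-a', 'doc-b', 'relatedItems')]
```

The useful property is that anything that cannot be resolved to a live source and target at restore time, like a stale relation value still pointing at a dead Archetypes object, simply isn't recreated, which is why the round trip cleans things up.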
Number three: helpers to manage, inspect and rebuild relations. 16 minutes, okay, I think I'm good in time. So, the control panel obviously uses a lot of logic to do that stuff, to export, store, restore, rebuild and link things, and all of that is exposed in a module called relationhelper in Products.CMFPlone. It has these handy methods that are written in such a way that it's not the internal API of a tool or some crazy adapters; they are super dumb methods that you can easily use in upgrade steps. For example, you can just call rebuild_relations, and you don't even have to pass anything; it just works and rebuilds relations for everything, it gets the portal and so on on its own. So that's very useful, for example, in an upgrade step. And again, the same methods for Plone 5 exist in collective.relationhelpers. But there is more. The fourth thing is better and more relation fields. The default relation fields in Plone look like this, and they can be configured to do whatever you want. The first one is an example of just a simple relation list field; you can link anything. The second one only allows documents. The third one is a relation choice that only allows documents; the visual difference is almost none, but I will show you in a demo in a second. And the fourth one is the relation list field in search mode.
It's very interesting: if you have a huge site and you don't want people to have to navigate to where they should go, then you configure it to use the search mode, and it shows you nothing, obviously. Why does it show you nothing? What's wrong here? I need to reload this thing; it used to work a second ago. Let's try that again. Whatever, it works, believe me. I'll get to how they are configured in a second. These are not new, by the way, these are just different; I'll get to what that all means in a second. So this is the normal relation list, where you can add as many as you want; you can pick just what you want. A relation choice only for documents and events; one only for documents, which in this case only allows one item, so you see that everything is grayed out except for the one item; and the relation list field in search mode. And the next ones are more and more interesting... no, sorry, let's go back and look at the example, where you see the code for that. No, sorry, I think I messed that up. Yes, because I have that here in a second. Good. Never mind. So this is a very simple example of a default relation field, the very, very first one, but it's constrained to documents and events. This is a more complex one, which is also constrained, to speakers, and has an optimization that limits the catalog query. So this is a relation list that only shows you speakers, no matter where they are in the catalog, because in line 19 you see the widget is configured to use the search mode. So once you click on edit, it lists all speakers, no matter where they are in the site.
The problem being: if you click on browse, there are these two buttons, you can navigate through the whole site, and your users will be super lost very easily. And that is where this comes in very handy, because there is now a new StaticCatalogVocabulary; you see the important line, line 1. If you configure your field with a StaticCatalogVocabulary, you can pass a catalog query and use any kind of widget, which was not possible before. So here you get a select widget, and it only shows documents and events that are published, and there is no way that a user can start navigating and searching and getting lost in the whole thing. This does exactly the same, but with a radio widget. And it works beautifully; I just wanted to show you... this is the wrong one. Nope. Here. Yep. So here are the radio buttons. Checkboxes: a relation list with items from a vocabulary, you can select multiple ones. And this is the AJAX select field. So if you click there, you need to enter at least two characters... in this case something is seriously wrong with my demo instance here, but still. So these are the examples that I wanted to show you. This is the AJAX select widget, and you see a couple of interesting tidbits in line 21. You can configure the displayed text, because in some cases you have a lot of people who are called Miller, for example, or Smith. And if that's the title of your object, then you will have the same results in your select or in your radio buttons; the ID is obviously different, because you use the UID for that, but you will not be able to differentiate between these different people. So in this case we use the type and the title at a path, so you create something that is good and uniquely findable. The minimum input length is two, and the AJAX query waits 500 milliseconds after typing before making a new query; if you have huge databases, that can make a real difference.
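As a rough written-out version of the pattern on those slides: a relation field fed by a StaticCatalogVocabulary so that an alternative widget can be used. This is a configuration-style sketch, not runnable outside a Plone site, and the field name, interface name and query values are examples rather than a verbatim copy of the slides:

```python
# Sketch of a dexterity schema using StaticCatalogVocabulary with an
# alternative widget. Names like IExample and "target" are made up.
from plone.app.vocabularies.catalog import StaticCatalogVocabulary
from plone.autoform import directives
from plone.supermodel import model
from z3c.form.browser.select import SelectFieldWidget
from z3c.relationfield.schema import RelationChoice
from zope.interface import provider
from zope.schema.interfaces import IContextSourceBinder


@provider(IContextSourceBinder)
def published_docs_and_events(context):
    # The important line: the vocabulary is driven by a catalog query,
    # so the user can never wander off into the rest of the site.
    return StaticCatalogVocabulary(
        {"portal_type": ["Document", "Event"], "review_state": "published"}
    )


class IExample(model.Schema):
    # Use a plain select widget instead of the related-items widget.
    directives.widget("target", SelectFieldWidget)
    target = RelationChoice(
        title="Relation with a plain select widget",
        source=published_docs_and_events,
        required=False,
    )
```

The same field with a radio, checkbox or AJAX select widget differs only in the `directives.widget` line and, for the AJAX variant, the extra options such as the minimum input length and the query delay.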
So, did you memorize all of that? These were just screenshots. I hope not, because you don't have to; it's all documented. The most important resource is the example content type: I have a package that I created a while ago, and it has examples for all field types that exist in Plone. Since a while ago it also has examples for all these relation field types, and you can copy and paste what you want, so you don't have to memorize all of that, including some of the fancy options here, for example the 500 milliseconds, and all the other alternative widget variants that you can use. And it's also documented in the Mastering Plone training: there's a whole chapter on that and a couple of examples that you can copy and paste from. So, no need to memorize all of that. There is a little more: relation fields without relations. I just told you that there is this parent pointer on relation values. So the problem with relation values is not only that they can mess up your migrations; it's that the parent needs to be an object. In the portal registry that doesn't work: if you have a registry value that you want to set in a control panel, you don't have an object that you can assign as a parent, and it doesn't even have a UID that you can point to, because there is just no location, no real object. So, the way around that is to just store a UID instead of a relation value, but in many cases you still want the related items widget (that's the name of the default widget for relations). You can do that: the point of this pattern is to just use Choice instead, or List with value_type Choice. You get the same user interface, but it doesn't store a relation value, it just stores a UID.
Obviously it has a downside. One of the upsides of relations is that when you delete one of the objects, Plone automatically goes ahead and deletes the relation from the other object; there's some cleanup magic so you don't get inconsistent results from the relation catalog. That wouldn't happen here, obviously, because you're just storing a string. So it's kind of dumb, but it works with the user interface, and as long as you don't build your whole application around that, it shouldn't be such a big problem. This is another example of a UID field, in this case a schema List, and you pass a vocabulary, again a StaticCatalogVocabulary, to use a different widget, the checkbox field widget. And you can easily use that in a control panel, or in non-contentish places. So, what about Volto? That's kind of the elephant in the room. There's always a "but": the related items field, the one relation field that we've always known, the only one that we have, is not displayed. You can edit it, but... yeah, you don't see anything. If you have a relation choice field, it is broken; I think Tiberiu tried to do something yesterday, I'm not sure if he's already finished. And none of the new widget types actually work, so you can't configure a relation field to use checkboxes or lists; obviously a normal select field, like relation choice, also doesn't work. So: what the actual fuck. I created tickets for that; the first ticket I created at the end of last year, and I created more tickets in spring this year. I'm absolutely happy to work on that together with all of you. And there are a couple of to-dos that we need to fix to make all of that work seamlessly in Volto.
So what I'm trying to do is obviously to have some component that is beautifully reusable to display related items, so that the default related items field shows up on content in Plone that does not use blocks, because among the default types in Plone only documents use blocks by default; all the others are schema-driven, and also documents are schema-driven. So related items somehow should show up. I'm sure that is a tricky UI issue, and I'm happy not to have to solve that myself, but someone should solve it at some point. Then the relation choice field obviously needs to be fixed. There needs to be support for the alternative widgets for relation fields, so that you can use radio buttons and checkboxes, and the ability to navigate through the whole portal in the really, really beautiful relation widget that Volto uses; you know it, it's the same as the image picker and the link picker. That's really, really good; I'm happy with that. We also need support for simple drop-downs, radio buttons and checkboxes, and I know Volto can do these, because the search block, which I love, supports those. And maybe, maybe we actually need an endpoint in plone.restapi to query relations, because a lot of the data in Plone is schema-driven, and you can create things like the family tree from the very beginning, but also other business applications; you can build them on top of relations, and it would be a shame not to be able to use this data in Volto. I know relation fields are serialized already, but it would be good to have additional support to query relations through the REST API. It could be based on the methods in plone.api, or whatever; I'm open for any kind of discussion there. Thank you very much for your attention. And also let me extend thanks to everyone, all the people who worked on that.
And also Rachel, who worked with me for at least three days to get through all this data-manager craziness and created the StaticCatalogVocabulary. And Rhys, who took my code from collective.relationhelpers and made it sane in plone.api, and Julian, who helped him to do that. Thanks. And I'd be happy to discuss anything and answer all kinds of questions in the Jitsi, and I have no idea how we are with time. We've actually got about 15 minutes left. Thank you, Philip. Yeah, that was brilliant. There was some discussion in the Slack about the code of conduct, but I'm sure you're going to be fine. You can slap me around with a trout. Oh, please. Yeah, you know, here in Wisconsin it's sturgeon; if you can find a two-meter sturgeon and hit me with it, then I think that would be wonderful. The relations control panel is fantastic. It looks brilliant, and I loved how you correctly pointed out that if you just click that one button, all your problems will be solved. All right, well, thank you again, Philip. You're going to be joining us in the Jitsi, which, of course, for you viewers, is the blue button underneath the video frame here. Thank you again, Philip, and I look forward to seeing what more you do, and the additional lightning talks I'm sure you're signing up for, all of you, including Philip, right? Awkward silence.
In the last couple of months, working with relations in Plone got a lot easier. In this talk I will present these improvements and show how you can create a data model based on relations without shooting yourself in the foot.
10.5446/56538 (DOI)
Ladies and gentlemen, I would like to talk about the digital transformation in higher education and outline a future of teaching and learning. This vision is based on recent developments and discussions in Germany and worldwide. For a long time, e-learning has been the framework, the paradigm for learning with digital media and technology. Here, the focus is on how to teach with digital media. It refers to the distribution and presentation of learning resources and communication technologies. Typical scenarios are the enrichment of traditional courses, for example with learning videos, quizzes or collaboration tools, the so-called blended or hybrid learning scenarios, as well as online or distance learning, for example massive open online courses. Recently, the discussion about teaching and learning in a digital age is shifting more and more from this idea of e-learning towards the question of what to teach, about content and competences. This focus on competence-based teaching and learning has become very important for higher education institutions in Germany. There is a lively discussion on the question of how universities and universities of applied science can prepare students for a new job market and a changed society, as well as train future researchers able to use digitally supported methods. These so-called future skills are much more than just digital skills or skills for using technology. They include the four Cs, that is, critical thinking, creativity, collaboration and communication, a lot of other general life skills, as well as new literacies. One example of this kind of literacy is data literacy, that is, the ability to read, work with, analyze and argue with data. Very important: these skills and literacies are not restricted to certain disciplines, such as computer science, but are seen as competences every student has to develop through their studies.
Universities now have the responsibility to define these skills in their specific contexts and integrate them in their study programs. But we also see the challenge that this focus on competences needs a much more flexible, much more agile approach to study programs, which we refer to as curriculum 4.0. Online and blended scenarios can help to support the respective competences and should also be aligned to the learning outcomes. The second aspect of future teaching and learning focuses on the learning environment, or culture: the university as a place of collective learning. Today, many universities still understand themselves as brick and mortar institutions. Face-to-face interaction is highly valued and regarded as a sine qua non of learning. Thus, the digital transformation, understood as online and distance learning, is seen as a threat to this ideal. But in the future, this assumed dichotomy between on-campus and virtual learning won't exist anymore. Universities are places of collective learning, on- and offline. Courses are available in different formats, adapted to different learning types and needs. Digital technologies enable personalized and adaptive learning. And the role of the teacher has changed to that of a mentor who accompanies their students on their individual learning paths. But individual or personalized learning does not mean learning alone, since interaction with teachers and fellow students is still very important and should be made possible with new learning and space concepts. Here, East Asian countries have been the forerunners in establishing new flexible and attractive learning spaces. In Germany, many universities have followed in developing innovative concepts and thus establishing the university as a place for learning. Students will come to the campus not only for their courses, but to learn alone or with their study groups.
Two examples you can see here are the SRH Hochschule Heidelberg, a private university of applied science, and the library of Technische Hochschule Mittelhessen. We also see more and more universities setting up maker spaces where students can interact with the general public for learning projects. This new learning environment or culture I outlined here also has implications for teaching and teachers in higher education. In Germany, we see a sharing culture emerging with a growing community around open educational resources, short OER. These are learning materials like texts, videos, or whole modules which are shared freely and which everybody can use and adapt for their personal needs, further develop and redistribute under a Creative Commons license. So not every teacher has to produce his or her own teaching materials, but can be part of a wider learning community. Sharing learning resources also means sharing ideas, and many universities are interested in this form of collaboration on an institutional level. For example, all eight universities and universities of applied science in the city of Hamburg jointly initiated the Hamburg Open Online University, a platform for OER and free online courses covering all kinds of subjects, for their students as well as for a wider audience. Last but not least, a new learning culture and a focus on skills raise the question of examinations. Will we still need them in the future? And how will we monitor student success instead? Learning analytics might be an answer. Learning analytics is the process of using data to improve teaching and learning. It refers to the measurement, collection, analysis and reporting of data about the progress of learners and the context in which learning takes place. It can be a tool for quality assurance and quality development, since it monitors student outcomes. It is used to reduce dropout rates by identifying students at risk and intervening with advice and support.
Learning analytics can furthermore be used to assess the learning behavior of different learning groups and enables the use of personalized and adaptive learning settings. In Germany, many universities discuss learning analytics, but German students often oppose this new kind of technology when they believe they are being monitored and assessed on the basis of this data. Indeed, there are many privacy issues, issues of equality and ethical questions to consider. It is important to note that the data collected for learning analytics does not provide the full range of information on student success. Learning analytics is more accepted when students have control over and access to their own data. So, basically, a student-centered learning analytics approach. And overall, learning analytics should not be used to monitor students, but rather to enable them to take control of their own learning themselves by giving them information on how they are progressing and what they need to do in order to meet their learning goals. And this in real time. Just this way, learning analytics can play a huge role in a new learning culture in higher education. Now, I have just outlined a vision of a future where digitalization transforms what and how we learn in higher education. But we still need to take a look at technology and ask which technologies will shape higher education in the future. Especially in the U.S., but also in Germany and other countries, we are beginning to see elements of artificial intelligence and machine learning technologies emerge throughout higher education, for example in learning management systems or in the form of chatbots that interact with students. Furthermore, higher education is experimenting with extended reality technology, such as virtual and augmented reality. These can be used as learning vehicles, blending the physical and virtual worlds, or providing a fully immersive virtual world for learning.
It is to be noted that these technologies are still in a proof-of-concept phase. Right now, they have no strategic significance for higher education management. But various disciplines experiment with these technologies and tackle the question of where they can add value to their respective teaching cultures. Now, we come to the fourth and last aspect of the future of higher education. We will have to rethink the structure and role of higher education in a digital age, since the digital age will be an age of learning. Already today, lifelong learning is a key to a successful career, and higher education has to adapt and offer more heterogeneous ways and learning paths. Already, we see new demands, such as more competence-based courses, which I discussed earlier. But people are especially demanding smaller units of learning and certification, which can be seen in the rise of so-called micro masters or certificates and diplomas of advanced studies. Such unbundling of credentials will become one new possibility for learning in the future. A study in Germany called, appropriately, AHEAD outlined four models, or paths, higher education might split into. So there is not just one way to study, but several next to each other. These different learning paths you can see here. We still have traditional learning paths with an uninterrupted study phase, either before or after a working experience, but also much more flexible and personalized study structures, where a degree consists of different paths from different institutions and even countries. So, let's sum it up. The discourse on digital transformation in higher education is shifting from e-learning, that is, the mere distribution and presentation of course material through digital media, to the four key aspects I outlined.
This means preparing students for a digital age by a more competence-based education, establishing a new learning environment or culture with the help of new technologies such as artificial intelligence or augmented reality, and, last but not least, changing the structure of higher education for the new demands of a digital age. So what we learned here is, first, that the digital transformation will change higher education profoundly. And that means, second, that university and faculty leadership have the responsibility to actively shape the future of higher education by providing and setting up appropriate frameworks in their institutions for this new culture of teaching and learning.
The video "Digital Transformation in Higher Education - The Future of Teaching and Learning" deals with the digitalization of higher education institutions and the role of teaching and learning in the digital age. The speaker, Dr. Budde, addresses in detail the challenges and opportunities of digital teaching and learning and outlines the digital competences needed in the current and future academic job market.
10.5446/56062 (DOI)
I think we're ready to start. We have a one-and-a-half-hour program and it will be split into basically four sessions. The first session is just the introduction: I will do a quick introduction and then we do a little poll. And then we are happy to have two presenters, both from the Netherlands, each specialized in different things, with super interesting talks. One is on the lessons we can learn from the past about biodiversity and extinction, and about the relationship between biogeography, geostatistics and the like. The other will talk a bit more generally about diversity and modeling diversity, not only biodiversity. I just want to emphasize that we are recording this meeting and we typically release the videos. Not on YouTube, we try to avoid YouTube as much as possible; we release it through the German TIB portal, the German National Library of Science and Technology, and this video will be available under a CC-BY license. I also want to emphasize that we will discuss several discussion points in this webinar, some of them a bit controversial, and for many things we don't have an answer. That's why we want to connect with you. There will be some questions on what our biodiversity debt is, I mean, how bad is it, and whether you believe that some things work or don't work. So we have this questionnaire and we would like to find out from you very quickly what your opinion is. There will be some questions about you, just to check what the state of your knowledge is and what confidence you have in some topics, and then there will be opinion questions. After that, in the fourth session, we will have an open discussion and we can zoom in on things, and we want to see the results of how you voted. It usually gets very interesting. We did a lot of these science webinars.
So of course we know how to prepare spicy questions, and you will see, it will get very interesting to see the results. And sometimes the most interesting science is the things that surprise you, of course, not the ones you can predict. In a nutshell, what we're talking about here is our planet and a biological evolution that lasted, let's say, a billion-plus years, and this evolution came up to the human species. If you look at evolution, you have many species interacting; almost all species on the whole planet are connected, some more, some less. And you have this predator-prey relationship, and people discovered in biology that the predator-prey relationship is semi-chaotic but eventually leads to a very stable pattern. If you plot it, it's something like this: a beautifully harmonic pattern where predator and prey go in a circle, or some sinusoid function. And then people pop up, and what happened is that we started catching the mammals and all the animals that we needed to survive. We were slowly catching all the animals, the so-called megabiota, and the size of the megabiota that remains becomes smaller and smaller. Basically we start by catching the biggest ones, the mammals and the biggest animals, and when we have caught all of those, we go for the smaller and smaller ones, and we do it both in the seas and on the land. And so what happens is you don't have that predator-prey curve anymore. The curve goes only one way; it's not a sinusoid anymore. There are more and more people, and we also live longer, even though there are differences: okay, people in Japan live the longest and in some other places they live less.
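The predator-prey cycles described above are classically captured by the Lotka-Volterra equations. A minimal numerical sketch (the parameter values here are made up for illustration, not fitted to any real population):

```python
# Minimal Lotka-Volterra predator-prey sketch. Parameter values
# (a, b, c, d) are illustrative only, not from any real ecosystem.
def lotka_volterra(prey, pred, steps=20000, dt=0.001,
                   a=1.0, b=0.1, c=1.5, d=0.075):
    """Euler integration of dx/dt = a*x - b*x*y, dy/dt = d*x*y - c*y."""
    trajectory = []
    for _ in range(steps):
        dx = (a * prey - b * prey * pred) * dt
        dy = (d * prey * pred - c * pred) * dt
        prey, pred = prey + dx, pred + dy
        trajectory.append((prey, pred))
    return trajectory

traj = lotka_volterra(prey=10.0, pred=5.0)
prey_vals = [p for p, _ in traj]
# Populations oscillate around an equilibrium instead of settling:
# prey rises above and dips below its starting value, cycle after cycle.
print(min(prey_vals), max(prey_vals))
```

Plotting prey against predator traces the closed loop the speaker describes; once one species (us) keeps growing regardless of prey levels, that loop is broken and the curves go only one way.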
But what happens is that, yeah, things are really degrading around us and we are not in that predator-prey paradigm anymore. And as you see, the human population is growing exponentially and is still expected to continue growing. We are hoping it will slow down at about 11 billion, but it will still continue growing. So that's just a general introduction to the story. Now, something that pops up when you talk about biodiversity: one of the things we could ask ourselves as scientists trying to be objective is, we call this COVID pandemic a global pandemic, but if you look objectively, maybe we are the pandemic and COVID-19 is a self-preservation mechanism. Should we look at the problem like that? And the other thing is, we cause the extinction of many species, and who's going to pay for that? Is it only the future generations who pay, or does there have to be some redistribution? Do we have to reconsider and question the monetary system we have at the moment and the mainstream economy? So this is, more or less in a nutshell, the idea of this seminar. We are going to touch on controversial questions and we would like to hear your opinion. We were thinking that many of you are connecting from different parts of the world, and some of you are maybe into biodiversity and some of you are not. So we were thinking to do a little poll, just to get a quick idea of how you feel and where you stand. So please, it's a one-choice poll, just 10 questions in the first round. Please go very quickly and don't Google it, please.
Just pick what you feel is the answer, because it's not a test, it's anonymous, and we're not going to hold it against you or anything. We would just like to see the state of your general knowledge. So please just go very quickly and do the 10 answers, and then we will show you the results after we finish the first talk. We are very happy to have with us one more time Kenneth Rijsdijk from the University of Amsterdam, the Biomak Group. He's an expert on biodiversity and also a bit of a celebrity. So please, Kenneth, take over and give us your webinar. Well, thank you very much, Tom and Valentina Del Conta, for organizing this very interesting symposium on biodiversity debt. And I think the questions already make you realize that major things are going on with our planet and nature. So it's a great pleasure to present my perspective on biodiversity and extinctions and the lessons we can learn from the last 100,000 years. On my starting slide you see the sea level curve for the last 5 million years. You see that 5 million years ago sea levels were not fluctuating so much, but they were about 30 meters higher than today. Then about, say, 3 million years ago they start to fall, and they start to fluctuate, and the amplitude of the fluctuations increases. And as you can see, in the final moment, at 0 million years ago, the fluctuations are most extreme. We are living at the moment, at 0, at the very top of this diagram, where sea levels are now, from a human perspective, zero meters on average. But they were 130 meters lower only 18,000 years ago. So what can we learn from all this? And on the other side, you also see a nice bronze dodo, the extinct dodo, a gigantic pigeon that existed on the island of Mauritius. I will talk about this later.
So I'm from the University of Amsterdam, the Institute for Biodiversity and Ecosystem Dynamics, and I have a background in Earth science and ecology. I'm specialized in island biogeography and I'm now acting as a lecturer for the Bachelor Future Planet Studies and the Future Planet Ecosystem Science Master. My research really centers on how fast the landscape changes and whether nature can keep up with this change. Landscapes change because of geological and ecological processes: mountains form, mountains erode, ecosystems develop, and while they develop, soils are formed. This all occurs at certain rates, and nature, of course, is used to all these natural rates of change in the landscape. But humans also change a lot, and this is what I'm very interested to compare: how do the rates of change we humans induce in our landscape compare to the natural rates of change? And can we then relate this to biodiversity and perhaps even to extinctions? To start my presentation, I would like to take you back to 2005, when I was very coincidentally involved in the discovery of a very rich fossil mass grave, a bone layer, on the island of Mauritius. Mauritius lies about 500 kilometers east of Madagascar, central in the Indian Ocean. It's a beautiful island; the background of my Zoom picture is actually also from the island. And we made an amazing discovery there. We found a very rich natural mass grave, and because it also contained dodo bones (you have to imagine almost two football fields full of dodo bones and other bones from giant tortoises and other animals), it became world news. This was the dodo effect, because many people know the dodo from Alice in Wonderland and from other popular publications. And so we got a lot of attention for this discovery. But of course, the dodo was just a bird like any other bird, a unique bird that lived on Mauritius.
So just to share with you some information on this still very beautiful island: Mauritius is not very big, 1,850 square kilometers. About one million people live there, and the island is visited by about one million tourists, mainly from Europe and from Australia. The tourism dependency of this island is very large. I found this plot on the Mauritian governmental website, and you can see the number of tourists fluctuating over time. On the x-axis you see the years, and on the y-axis you see one million and 1.5 million. It fluctuates between these with a kind of rhythm, probably related to when we have holidays in Europe or in Australia. But you also see this enormous plummet: it falls down when the COVID pandemic begins. This of course was a major disaster for Mauritius and all the people who live there, whose livelihood and quality of life depend very much on tourism. And it actually illustrates the dependence of island states and islands on external income and imports. This is just one illustration. I just want to take you back, because Mauritius has a history that is part of the Dutch history, of the Netherlands. On this map you see the Netherlands and you see this red indented line, and it shows the way ships in the 17th century sailed to the Far East, to Indonesia. It was a very long tour, a very dangerous tour, with a lot of piracy, and all these countries in Europe had wars with each other. So it was a very risky tour. But in 1598 Dutch sailors discovered an island, which they called Mauritius. This island was not inhabited by humans; it was one of the last islands on the planet that was colonized by humans. And in this exceptional situation we Dutch people were the first people that started to live on this island.
And yeah, in our trade history we marketed, we sold this island to potential investors to pay for these dangerous trips, to buy ships and to invest in these enterprises to sail all the way to Indonesia, for instance, to obtain very precious spices. In order to reduce the risks, the island of Mauritius was a very important station, because here the ships could replenish fresh water and fresh food using the natural resources on the island. So there was a lot of traffic via Mauritius. The discovery was heralded at that time in many publications. And at that time Mauritius was of course completely forested. The picture you see here is of Mahébourg Bay in the southeast of the country. You see a lot of patches that are now deforested; they are mainly sugarcane plantations. But this is the bay where the first Dutch ships settled and where the Dutch colony, the fort, was actually raised. And yeah, we replenished water here and caught the giant tortoises and other animals that lived there, and they formed a very important food source on our way back home or all the way to Indonesia. In 2005 I was invited by a friend to visit this island, and the question was: Kenneth, can you and your colleague reconstruct how the situation looked when the Dutch settled there in 1640? How did the landscapes look and what kind of vegetation grew there? So I went across the island and did field research with a colleague of mine, an expert in fossil pollen, and we collected samples from various marshes on the island of Mauritius. And at one site, actually by sheer coincidence, we found it was very rich in fossil bone material. We started to dig there. We asked whether we could use a digging machine, and the landowners allowed us to use one. And then we collected this sample after some digging. This is a scoop, about one meter wide, and it's completely filled with bones. If you zoom in, you see tortoise shells lying there.
And if you zoom in, you see here a foot bone of the dodo. The dodo is this iconic species of extinction, which features in the book Alice in Wonderland, published in the 19th century and a bestseller by then. That is why this particular bird has become so famous. As I told you already, we got a lot of attention from the media, and that allowed us to collect money to pay for several expeditions to Mauritius. We investigated the site over several years; for seven years we went there to collect samples and to dig, and then we spent more years analyzing the data and trying to find out how this mass grave was formed. Why did all these animals die on this spot? You have to imagine, it's two football fields full of fossils. We did this research in collaboration, of course, with Mauritian partners: the University of Mauritius, the Mauritian Wildlife Foundation and the Mauritius Natural Heritage Fund. And we discovered all kinds of animals of which about half are extinct, like the giant parrot Lophopsittacus, or indeed the dodo (you see here the bronze artist, the maker Nick Bibby, who is actually making a mold for a beautiful bronze dodo), but also a giant tortoise with a very long neck and some other animals that are extinct now. So of all the animals we found, half of them were extinct. To me, that was an eye-opener. I was really surprised. And then, of course, extinction is part of life. We all know that 66 million years ago the planet was hit by a meteorite, and that was the end of the dinosaurs. Many other species came to an end as well as a result of this impact and the enormous change of our biosphere, which completely changed as a result and led to a nuclear winter that lasted for at least a year. And so many animals became extinct. And this, of course, happened much more often in the history of the planet.
And we call this extinction, the end of the dinosaurs, the fifth extinction, like Tom already pointed out. Before the fifth, there were other extinctions earlier in time. The big ones we give numbers; the smaller ones also occur, but they are not denoted as the fourth or the third or the second extinctions. Locally, extinctions are part of life. So it was not so surprising to find extinct animals in a fossil mass grave; I would be surprised if there weren't any. But what was surprising is that most of the species we found had become extinct just since the Dutch started to live on the island, since 1640. That is very recent. And you have to imagine that all these animals had lived on this island for millions of years. So to me, this was a mystery. How did this happen? To understand why extinctions occur disproportionately more on islands, you have to understand that evolution on islands causes unique species to form. For instance, the dodo is unique to Mauritius. There may be thousands and thousands of islands, but only one island had the dodo. So if the dodo becomes extinct on Mauritius, it's extinct globally. That's why most of the extinctions humans are aware of are known from islands, whereas extinctions on the mainland are harder to detect. Together with Sietze, who will present after me, we then looked into the history of Mauritius, trying to understand whether land use change played a role in causing extinctions. And as you can see in this graph, over time the island of Mauritius became deforested. You see native vegetation cover on the y-axis, and you see that it is declining; the green line is dipping downwards towards almost zero in 2000.
Whereas on the other hand, you see the increase of humans, and you see that humans increase over time, not as a regular exponential process, but rather irregularly, depending on local historic events. But even when humans were not increasing in numbers, the deforestation continued. With the loss of forest, you can imagine that the probability of going extinct increases. The forest is the house of many animals, and when we remove the forest, there is no house for the animals anymore, so they start to become extinct, or they become more vulnerable to other pressures, like the introduction of predators. Oceanic islands generally don't have predators. But when humans come there, they bring animals on their ships, sometimes by coincidence; the humans are not aware that they bring rats, for instance, and these rats escape on an island like Mauritius. As there are no predators on an island like Mauritius, the rats can grow in population and swarm the whole island, and they can, for instance, eat the eggs of the dodo. And we think this happened. Besides rats, cats were introduced, pigs were introduced, goats were introduced, deer were introduced, all kinds of animals were introduced, and as there were no predators that would hunt these animals down and keep their populations in check, these animals threatened the local animals, and the local animals became progressively extinct. This is what we found fossil evidence of: local extinctions. Now, when you calculate the mean deforestation rate on the island of Mauritius, it was about five square kilometers per year. But is that fast? Can nature keep up with these deforestation rates? Does that lead to extinctions? We actually don't know exactly, because we don't know exactly how fast landscapes change over time.
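As a rough sanity check on that figure, the island area and the mean rate quoted in the talk imply a timescale for clearing the whole island (the arithmetic is just illustration):

```python
# Rough sanity check: at the mean historical deforestation rate quoted
# in the talk, how long would it take to clear the whole island?
island_area_km2 = 1850            # area of Mauritius, as quoted earlier
deforestation_km2_per_year = 5    # mean rate quoted in the talk

years_to_clear = island_area_km2 / deforestation_km2_per_year
print(years_to_clear)  # 370.0
```

That 370-year figure is consistent with the roughly 360 years between Dutch settlement in 1640 and the near-zero native vegetation cover around 2000 shown in the graph.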
But what we do suspect is that the rates of human-induced change exceed ecological recovery times. When we deforest an area, the nutrient-rich soil on which the whole ecosystem thrives becomes lost; it erodes away within weeks after deforestation. Humans can remove soils within a century or even faster, but to recover them it takes, on average on this planet, about a thousand years to form one meter of fertile soil. And after a soil has formed on an island, which also takes centuries, it takes further time for an ecosystem to form. So you can imagine that on an island like Mauritius, a rich ecosystem evolved over thousands and thousands of years. Here you see a trophic network, which is the web of interactions that exists between all these species. These interactions define the resilience of an ecosystem: the more interactions you have, the more resistant that ecosystem becomes against perturbations. So if you have a rich ecosystem, it's perfectly adapted to deal with extreme changes, for instance extreme climate changes or extreme effects of land changes. Now we have some idea about artificial rates of land change, but we are also very curious what the most extreme rates of land change, of removal of forest, are in nature, and whether those natural changes also caused extinctions. Nature can regenerate, but can nature keep up with the human-induced landscape changes? One of the fastest and largest-scale natural changes we know of is sea level rise. When the sea level was very low during the ice age and then started to rise, 130 meters in total, you can imagine that a lot of tropical forest at the coast became flooded and was lost, and that many islands became smaller. Did that cause extinctions? We know that the fastest rates of sea level rise were about 40 meters in 1,000 years.
So these are extremely high rates, from the end of the ice age, when the ice sheets melted and sea level rose in some places at very high rates. So let's have a look at how sea levels changed over time. Again, we can see how over the last few million years of the planet's history, sea level fluctuations escalated to an extreme degree. We are within the red circle; very coincidentally, we are at the very top of this sea level high position. And you can immediately see that the average sea level, say for the last million years, is around 65 meters below present. As sea levels change, islands become smaller, some islands fragment, and land bridges form and disappear. So how did the local species that lived on these islands deal with these changes? In order to investigate this, we made sea level rise models. We started this process with Tom Hengl years ago, and then with Sietze and other colleagues we finally came to a model that is able to identify the rates of sea level change at the end of the last ice age. This is a complex geophysical model, whereby we use various geophysical parameters to make an accurate prediction of the height of sea level at certain time steps. Here you see, for instance, for Greece, how the area changed over the last 18,000 years. When sea levels were low, the land was much more extensive, and when the sea level started to rise, the land shrunk, some islands disappeared and some islands fragmented. From our work in Greece, we noted that yes, extinctions must have been probable on these islands as a result of sea level rise and land loss. But these were local extinctions: on a few islands a certain species was lost, but on other islands the species survived.
So this teaches us something about the effect of sea level rise on natural global extinctions: we didn't find evidence for such extinctions so far. If we look at what happened on certain islands, for instance Mauritius, we see that when the sea levels rose 120 meters, the island contracted 10%, but it didn't affect the dodo. We also have another island nearby Mauritius that contracted as much as 90%, and it didn't affect any of the living species on that island. As you can see in the red letters at the bottom of the slide, the land loss on Rodrigues, the small island east of Mauritius, was still lower than the deforestation rate on Mauritius we had identified in our earlier studies. So we could probably conclude that over the past few million years of sea level change, animals, including the dodo, got used to it: this happened so often during the past million years that the biota probably adapted and dealt with it, and the rates of change were probably not so high that they caused extinctions. What we found on Mauritius, we found on other islands as well; Sietze's latest research in particular shows how, in spite of very different histories, deforestation occurred after human colonization. We can infer that, as a result of fast deforestation, similar extinctions may have happened on islands all over the world. From our model we can now, for instance, calculate how much land was lost in the tropical zone, and we deduced that over the period of sea level rise, about one billion hectares were lost in the tropical zone over 18,000 years. But over the last 40 years, we deforested the same area. It's hard to believe. When I say this, I would like to ask you: please check, perhaps I am making a major mistake here, because it is so hard to believe that we changed the face of the planet so much in such a short time.
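The comparison the speaker invites us to double-check is simple arithmetic. A quick sketch in Python, using the round figures from the talk (about one billion hectares lost to post-glacial flooding over 18,000 years, and roughly the same area deforested in the last 40 years; both are the talk's approximations, not independent data):

```python
# Figures from the talk (round numbers): tropical land lost to
# post-glacial sea level rise vs. land lost to modern deforestation.
natural_loss_ha = 1e9        # ~1 billion hectares flooded
natural_span_yr = 18_000     # over the deglaciation period

human_loss_ha = 1e9          # roughly the same area deforested
human_span_yr = 40           # in the last 40 years

natural_rate = natural_loss_ha / natural_span_yr   # hectares per year
human_rate = human_loss_ha / human_span_yr         # hectares per year

print(f"natural loss: {natural_rate:,.0f} ha/yr")  # ~55,556 ha/yr
print(f"human loss:   {human_rate:,.0f} ha/yr")    # 25,000,000 ha/yr
print(f"humans are ~{human_rate / natural_rate:.0f}x faster")  # ~450x
```

So, taking the talk's numbers at face value, modern deforestation runs roughly 450 times faster than the fastest natural land loss of the deglaciation.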
And this is all an effect of the fact that we are now living in what we call the Anthropocene, or the Great Acceleration: the time in which we humans, as Tom just showed, started to grow exponentially. But that's not really the problem. There are not too many humans; it's that we want too much. As a result of what we want from our planet, we exploit it and we deforest it. And if you look into it, this is all driven by an economy that is based on monetary growth: the more money grows, the more money is available for mining and many other activities. Many people think the Earth can handle what we humans do. The Earth may be huge, but our biosphere is in fact very thin, a thin film of only 40 kilometers. And we can definitely change it. So we should be really careful with our biosphere, and we should monitor what we are doing with it, because animals are now becoming extinct, and these extinctions are in fact a symptom of a biosphere that is becoming degraded. That's not bad for the planet, but it's bad for us as humans, because it's the sphere we live in. To conclude: by 2020, we had already produced more mass of built materials than the total weight of the biosphere, of everything that lives on the planet. And this happened in just the last 100 years. If we continue like this, there won't be a biosphere left for us to live a good life in. And this is not happening in 100 years; this is happening now. So we really should monitor what we are doing, and we should deal with our natural resources in the most sustainable way possible. In this paper, we argued that we should also monitor the effects of mining. So in the end, I would advocate that we need an economy based on conserving our life sphere, our biosphere, and that we cherish what we have.
That's the place where we live, and where we can together increase the quality of all livelihoods. Let's do this. Thank you for your attention, and I'm happy to receive any questions.

Thank you, Kenneth. As you see, Kenneth already answered some of the questions we gave you; if you followed carefully, you could find, I think, about 50 to 60% of the answers. Also, Kenneth picked this example with the mining, and he picked Croatia on purpose because I'm originally from Croatia; he wants to put the blame on me. In the same way, with this biodiversity bill, I think we should give the bill to the biodiversity experts at the end of the webinar. But thank you so much, Kenneth. Valer, are there any questions? If you want to ask Kenneth something, or if you have a comment, just put it in the chat and we will forward it to him. The people that guessed Croatia in the poll should also be happy. Blame Croatia for everything.

Yeah, well, I think it's easy to blame. We are all humans, and we all fulfill a little task on this planet. So we make our own quality of the biosphere. And I believe that if we change our mindsets, not even radically, but if a billion people change mindsets and focus on sustainable actions, then together we can do it. Whatever we do, whatever jobs we have, we can actually improve the way we handle our planet. So much of what we know now is very recent knowledge. Tom, it's your core business to monitor what's happening on the planet, and all the techniques you developed couldn't have been developed 10 or 20 years ago, right?

So there's a question. You mentioned Mauritius, you said that sailors reached it and people started populating the island. So somebody might think that, as with some species, we simply ate them all; we just hunted them down and ate them.
But in Mauritius, and actually that's the tragedy: most of the species that we made extinct, we didn't make extinct directly. We did it unconsciously. You know, like in Christianity they say: God, forgive them, for they don't know what they are doing. So we did it completely unconsciously. Probably 99% of the species we made extinct, we made extinct unconsciously, just because we were unaware and basically didn't know what we were doing.

Right, and I can add to that. To give you an example: most people think that the dodo was eaten to extinction, but it wasn't. The Dutch, who went to Mauritius, didn't eat the dodo to extinction. They couldn't; the island was too big and there were too few people. But what we did do as humans was bring in animals that started to put pressure on the local ecosystem and on the dodo. So it's not so much hunting that does it, but a multitude of factors: deforestation, hunting, diseases we bring, and animals we bring from other places to islands like this. All together they cause extinctions.

Okay, so we have a question now; it's related, it's the same context. Let's say now we know. We are a bit smarter than 50 or 100 years ago. So what do we do now? Which are the good examples? Which are the islands where you would say: this is a really good system, this island should be really proud?

Well, let me start the good examples with Mauritius again. Mauritius is one of the global leaders in restoring ecosystems. They have set aside at least two islands that are nature reserves. On one island, tourists are allowed to come to see what nature looks like when it is completely restored. And the other island is like an ark: nobody is allowed to come there.
Only scientists and nature conservation people, to make sure that the process of nature restoration is not being interfered with. So that's a beautiful example of opening up some areas and allowing other areas to restore to their original conditions. Mauritius is a leading example there, just to say and highlight something very positive about Mauritius. But my research is now also touching on economics, because over the last year or so I have started to realize that the way we have organized our economy, which is a global North invention, the way we trade and put money in front of everything, is also a very big accelerator of the exploitation of our planet. It's a system in positive feedback. So one thing we should do, all together, is take a different perspective on the economy and design an economy that is actually in harmony with the fragile biosphere. And when I say fragile biosphere, I just mean the layer that is nice for humans to live in, because if the biosphere changes, other animals or plants will benefit from it.

Okay, and with this, we have to stop now. I'm very sorry, we have to stop and go to the next presenter, so you can stop sharing the screen. As I said, the second talk is by Sietze Norder. I know Sietze personally; I was actually his course supervisor during his master's degree at the University of Amsterdam. He did a PhD on island biodiversity at the end of 2020, and he got cum laude for it. Now he's a postdoc at the University of Leiden. It's a super interesting topic: he does really multidisciplinary research, as the illustration on his first slide shows, a cross between a bit of anthropology, biology, and geography. And with this, I would like to pass it on to Sietze. Please, Sietze, keep it to 25 minutes and keep some time for questions. The floor is yours.

Hello, everybody, very nice to see you. Thanks for putting your cameras on.
This way I don't have the idea that I'm talking only to a screen. And thanks also to OpenGeoHub for inviting me. Today I will talk about the global hotspots of biological and cultural diversity, and explore some of the parallels between biogeography and geolinguistics. At first you might think: wow, this is a really weird combination. So let me briefly say something about my own background. I did my PhD at the University of Lisbon on island biogeography in the Anthropocene and the Quaternary. The basic idea was: how do rates of change induced by human activities relate to long-term natural dynamics on islands? So it was really grounded in biogeography. Since about one and a half years I'm at the University of Leiden, doing my postdoc, and I'm trying to understand the environmental and societal drivers of cultural and linguistic diversity. Basically, I'm applying the methods and theories from biogeography to understanding the distribution of cultural diversity globally. For example, I've developed an R package called glottospace to facilitate the analysis of linguistic and cultural data. It's really work in progress, and it's open; there will probably be some nerdy people among you, so if you're enthusiastic, you're welcome to join in. So yes, we live on an incredibly diverse planet. The Earth houses a fascinating diversity, both in terms of living plant and animal species and in terms of cultures. And as far as we know, it's the most diverse planet in our solar system. Wow, what a surprise. But if we zoom in on our planet, we also see that this diversity is very unequally distributed across the globe. This is true for biodiversity as well as for cultural diversity.
However, we also see some general patterns, and one of them is the latitudinal diversity gradient, which shows that diversity, both in terms of plant and animal species and in terms of cultures, is generally higher closer to the equator. And we actually see that biodiversity, for example in terms of mammals and birds, is strongly associated with linguistic diversity. During my talk, I will continuously zoom in and out between the global scale and the local scale. When I talk about the local scale, I'm mainly focusing on islands, because that is where my experience is, and also because they are really relevant in terms of biocultural diversity. So what about these islands? Let me briefly touch upon this. Islands cover 7% of Earth's land area, but hold 20% of global biodiversity and over a quarter of the world's languages. If you look at this image, you might be a bit confused. On the top map you see the island flora, mainly light and dark blue circles. At the bottom you see the mainland flora, the number of plants on the mainland, in mainly reddish and orange colors. So basically what you see is that species richness, in this case for plants, is generally much higher on the continents than on islands. However, what this doesn't show is the relative contribution of the species on these islands, because they are generally island endemics: they are restricted to a particular island or archipelago, so we will find them nowhere else on the planet. Islands thus contribute disproportionately to global biodiversity, and the number of species per unit area is much higher on islands. The same is true for languages, as you can see in the bars at the bottom right. So, a very brief intermezzo about island biogeography.
I could talk about this for days, but here it is in just one minute. If you want to summarize it, there was this 1963 paper by MacArthur and Wilson, and their book in 1967. The basic idea is that a larger island can house a larger number of species compared to a smaller island, keeping all other conditions the same. Also, islands that are closer to the continent can house more species than islands located further away. But of course, as Kenneth already showed, island area changes, and the same is true for the continents. Only 20,000 years ago, during the Last Glacial Maximum, we could walk from England to the Netherlands, and from New Guinea all the way down through Australia to Tasmania. So the world looked really different, and we actually see strong legacies of these past conditions. For example, if we look at endemic island species, in this case snails and flowering plants, we see that the number of single-island endemic species is much larger than we would expect based on current area alone. If we take the past area of these islands into account, we are better at explaining the richness on these islands. And if we look at cultural diversity, we see, just as with biodiversity, that the environmental characteristics of an island, in this case area, really explain patterns of cultural diversity: larger islands also house a larger number of languages. So the bottom line is: whether we talk about biodiversity or cultural diversity, environmental conditions matter a lot. But now, as Kenneth already discussed, we have entered the Anthropocene. We know these hockey sticks. Human activities are drastically influencing biodiversity patterns and also cultural diversity patterns. So we want to compare what is happening now with these long-term natural dynamics.
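The "larger island, more species" rule that MacArthur and Wilson formalized is usually summarized as the power-law species-area relationship, S = c * A^z. A minimal sketch (the constants c and z below are illustrative choices for the example, not values from the talk):

```python
def species_richness(area_km2: float, c: float = 10.0, z: float = 0.25) -> float:
    """Species-area relationship S = c * A**z.

    c and z are illustrative; for islands, z is typically reported
    around 0.2-0.35, so richness grows much more slowly than area.
    """
    return c * area_km2 ** z

# A tenfold increase in area yields only a 10**z-fold increase in species:
small = species_richness(100)    # ~31.6 species
large = species_richness(1000)   # ~56.2 species
print(f"{large / small:.2f}x more species on the 10x larger island")  # 1.78x
```

This sublinear scaling also hints at why shrinking islands during sea level rise lose species only gradually: halving an island's area removes far less than half of its expected richness.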
For example, here again Mauritius, which Kenneth already showed: in the gray shades you see the area reduction over, say, 20,000 years, from light gray to dark gray. But then, in only 400 years of human settlement, the entire island was deforested. So the rates of change drastically increased. Here is one study I was involved in: we compared nearly 30 islands around the globe, and we found a consistent pattern that the rates of vegetation turnover, the rates at which vegetation across all these islands changes, accelerate following human arrival by a mean factor of 11. This is really drastic. It correlated really well with the moment of human arrival and could not be explained by natural conditions. So it's not only the magnitude of change, but really also the rate of change. Then, island extinctions. Depending on the source, between 60 and 75% of all species that went extinct worldwide were island species. That's really remarkable if you consider that islands cover only 7% of Earth's land surface. For some taxa, like birds, it's even more drastic: 95% of all birds worldwide that went extinct were island birds. And of the species that are now most endangered, 50% are island species. So it would be good to focus our attention on islands, and that's why we wrote a scientists' warning paper, as you may have heard of these kinds of papers, to really bring this to the attention of the wider public. And if you look at the global top 10 or top 15 of nations by the proportion of threatened or extinct species, nearly all of them are island nations: for mammals, there's only one mainland nation, and for birds, all of the top 15 are island nations. So what are the drivers of this tremendous loss of global biodiversity? Well, basically there are three main threats to terrestrial biodiversity.
As you see, for marine diversity the proportions are a bit different; there, exploitation is much more important. For terrestrial species, across taxa, habitat degradation is by far the most important driver of biodiversity loss, followed by overexploitation and the introduction of invasive species and disease. Remember habitat degradation; I will talk more about that in later slides. But before we go there, we move again to cultural diversity, because not only biodiversity is highly threatened; cultural diversity is severely threatened as well, even more so than birds and mammals. And again, many of the languages that have disappeared or are disappearing were island languages: 25% of critically endangered languages, even 50% of those that are endangered, and 12% of extinct languages were island languages. To give one example, this is an image of Boa Sr. She was the last speaker of Bo, an indigenous language of the Andaman Islands. When she passed away in 2010, her language disappeared with her. And this is just one example; there are many like it. So when we think about the loss of species and the loss of cultures and languages, these are much more related than we might think; I will come back to that in later slides. I talked about the main drivers of biodiversity loss worldwide, so what are the main drivers of language loss and endangerment? What we see, for example in the orange bars, is that, as you might expect, the larger the population of speakers, the less threatened, the more vital, a language is. However, if we look at the main drivers of loss, these are things like road density: how well connected these cultures are to road networks is really a main driver of language loss, and so are education and the years spent in school.
So people shift to a majority language, as we see also where I grew up in the Netherlands, where people shift more and more towards English. And of course you have this at the subnational level as well, where schooling is in the national language. So these are two of the main drivers of language endangerment. Now, Kenneth briefly talked about the sixth mass extinction. What about it? Well, there's good news and there's bad news. Let's start with the bad news. The rate at which we are currently losing species is much, much larger than background rates: the number of species we are losing now is much larger than what we lost over the past millions of years; the rates have drastically increased. However, if we look at the magnitude of extinctions, we are not yet at the 75% of species threshold that is often used to speak of a mass extinction. To remind you, the fifth mass extinction was when the dinosaurs were wiped out by an asteroid, so we are not there yet. Why is that good news? Well, it means there is room: if we now drastically slow down rates of extinction, we can keep this sixth mass extinction from happening. However, if we do not act now, within a few centuries we will be at that point, and then we leave our planet much more impoverished than how we found it. And if we look at languages, we see a similar thing: if we don't intervene now, language loss could triple within 40 years, with at least one language lost per month, which will total 1,500 languages by the end of the century, 20% of global linguistic diversity. So we have to tackle these together. How can we halt these rates of extinction? I think there are practical interventions we can make at the local scale and at the global scale, and also more fundamental cultural paradigm shifts.
First, again, island biogeography. I already showed that larger islands can house more species than smaller islands. And it hasn't gone unnoticed that this theory of island biogeography also applies to conservation: larger conservation areas can also house a larger number of species. If we look at the global scale, only 15% of the global terrestrial surface is protected. So if we want to conserve species, if we want to halt biodiversity loss, we have to increase this area. Of course, that's not the only thing: we also need to reconsider how we design our cities, how we shape our agriculture, how we consume. These are all related; we cannot just set areas apart for conservation, we also have to reinvent how we use the land. To give some examples of conservation success stories at the local scale: there was a study of around 30 species that were on the brink of extinction, which assessed the probability that extinction had been prevented by conservation actions. It was a nice paper from 2020, and here too, 65% of the species they considered were island species. So really, if we put in the effort, if we are dedicated and invest both time and monetary resources, we can halt species extinctions. To give another concrete example: the ebony trees on Mauritius were basically a zombie forest. The forest was not reproducing; all the seeds that fell off the mother tree just fell there at the stem. This forest was the living dead. And why? Because the main frugivore, the largest frugivore, had already disappeared: the giant tortoise that was endemic to Mauritius. When these giant tortoises disappeared, the forest could no longer regenerate and rejuvenate. So what people then did, in a really bold action, was reintroduce giant tortoises from other islands to Mauritius to revive this forest.
So it's not only conserving what's there; it's also restoring lost interactions. It's not only about one species or conserving habitat, but also about restoring interactions. Then I promised to talk about shifting paradigms. I think this figure is shocking if you think about it: 50% of the global habitable land is used for agriculture, and 80% of all agricultural land we are using to grow feed for livestock or for livestock grazing. Eighty percent. This is dramatic. So just shifting our consumption to a more plant-based diet would already be a major transition in terms of biodiversity conservation. And the image on the right is a really nice visualization: if you consider the total biomass of all wild mammals and the total biomass of all livestock, livestock weighs 15 times more than all wild mammals globally. I think this shows that we really need to change how we interact with the rest of the living world. So, I have been talking about biological diversity and cultural diversity, and I have switched back and forth between the local scale and the global scale. How are they related? Well, 25% of the global land surface is indigenous peoples' lands, and 40% of the protected areas and ecologically intact landscapes coincide with these indigenous peoples' lands. What I want to say with this is, for one thing, that indigenous peoples are really stewards of global biodiversity. But it's not only that: if we lose languages, we are not only losing words, we lose worldviews, how people look around, how people view the living world. We need this diversity of voices and ways of viewing the world, rather than this juggernaut of global capitalism that is really driving the loss of biodiversity and cultural diversity. We really need these different voices and views. There's a really nice paper by T.S. Adal that goes into this in much more detail.
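The percentages quoted here compound quickly. A quick check with the talk's own figures (which are round approximations) shows what share of all habitable land ends up serving livestock:

```python
# Talk's figures (approximate):
habitable_used_for_agriculture = 0.50  # ~50% of habitable land is agricultural
agriculture_used_for_livestock = 0.80  # ~80% of that is feed crops plus grazing

livestock_share = habitable_used_for_agriculture * agriculture_used_for_livestock
print(f"{livestock_share:.0%} of all habitable land serves livestock")  # 40%

# Biomass ratio cited in the talk: livestock outweighs wild mammals ~15:1
livestock_to_wild_mammals = 15
print(f"livestock biomass is ~{livestock_to_wild_mammals}x that of wild mammals")
```

So, on the talk's numbers, roughly two fifths of all habitable land on the planet is devoted to raising animals, which is why a dietary shift has such leverage for conservation.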
To summarize, and I know this slide is way too full: my point is that we need to switch constantly back and forth between the global scale and the local scale. We have to consider how different localities contribute to global biodiversity, and how we can put our efforts together at the global scale to support local communities, island communities, and indigenous communities in their role as stewards. And of course, we need to look not only at the local scale, but also at all the interactions between different localities, learn from these different ways of viewing the world, and embrace these different viewpoints. That was really short; if you want to learn more, this is my book. It's only in Dutch for now, but it's being translated into Italian; hopefully it will be published soon. And I'm writing a children's book about the same topic as well. I would be happy to take any questions or suggestions. This is my Twitter, and you can also reach me by email. Thanks for your attention.

Thank you, Sietze, that was really super interesting and, as I announced, super interdisciplinary. We have some time for questions. In the meantime, Vali, can you please open the third poll? It's a much shorter one, but we don't want to lose too much time. Yes, thank you. So, someone has a good question about this connection. You look at these correlations, so you just overlay variables and correlate them. But what's the causality? How does it really work? How does language diversity and its preservation help preserve biodiversity?

Yeah, that's a really nice question. I think you can answer it at what I would call a surface level, though it's not really superficial. One thing the data show is that indigenous lands really cover 40% of these natural areas.
So that's one really concrete way to recognize that these people are the stewards of these areas. And still, in many cases, indigenous people are being driven off their land, sometimes even for conservation purposes. So I think one thing is to acknowledge that, and also to shift perspectives in this regard. And, well, there was this question in the poll about the value of global biodiversity; we are also not valuing that in economic terms. So, for example, if we think about Mauritius...

But maybe, if I may interrupt you, maybe there is a difference in the culture. Maybe that's really the key, because many Aboriginal cultures educate their children to be more respectful, to take care of their land. Their happiness is more driven by how they harvest the land, how they grow things. And happiness in the West is more driven by income, by safety, by technology.

Yeah, but I also think that's maybe a bit too simplistic, because there are of course also many examples of indigenous cultures that have driven species to extinction. So it's not necessarily that one part of the world does everything better; I don't think it's like that. But I do think we need to reconsider the idea that there is one global narrative that we should all strive for, namely perpetual economic growth at the cost of everything. In that regard, we can of course learn from different ways of viewing the world, and there is a point in saying: well, this is enough. And as you say: why do we still rank countries based on their gross domestic product and not on their state of well-being? Of course, efforts are being made towards this goal. Yeah. Okay.
And tell me something: you mentioned that half of the agricultural land basically goes to livestock production. And it's quite extensive, right? So is it 80%, or 77%, of all agriculture? But it's also very extensive, you know. Then in the Netherlands, on the other hand, you have this super intensive livestock production, and there are ecological problems there too: there's too much nitrogen released into soil and water.

But first, in the Netherlands, what do those cows eat? I think the Netherlands is one of the largest importers of soy globally. That's why I say we cannot only go for local solutions; we cannot only look within the boundaries of the Netherlands, but we also have to consider these global flows of trade.

But what I'm trying to ask you: imagine the next 30 years, up to 2050. Do you think it's completely unrealistic that we can keep growing livestock production, even intensively? Or do we really have to go for a plant-based diet, do we just have to change the diet? We will be 10 billion soon. What do you think?

Well, we are now only talking about inputs and outputs, but of course the whole way we treat living beings is disrespectful, right? So it's not only about inputs and outputs; animals are constantly being grown to be killed. Intelligent animals get a life cycle of a few months to years, and we kill the milk cows when they have no economic use anymore. So this is really, I think, if you would design a society from scratch, not how we would do it. Okay.

Vali, we are close to the end.
Can you please open the poll results and share the screen? You can just share it to everyone. So this is the first one; you see my screen. Let's see: many of you are actually, let's say, a bit agnostic about biodiversity, but only marginally so, I think more than 70%. Let's see if you got the biodiversity intactness question: the correct answer is Canada, and most people got it wrong. The second one, the country with the highest biodiversity index, is Brazil; that's correct, and most people got it right. In the last 50 years, human activities led to a 60% decrease of wildlife; most people got it right, but some also clicked 40%. In the case of global warming of 2 degrees, how much will the global sea level rise? Most of you got it wrong: it's half a meter. So it's not a huge rise, but it's half a meter.

Can I say something about this? I have been following this closely for about, I think, 20 years, and each time the values are increased. Right now scientists are worried about the break-up of a major glacier in Antarctica, which would again accelerate even this prediction. So it's a bit iffy; it's the state-of-the-art knowledge at the moment, and it will probably be increasing over time.

Okay, but I just took the one published paper; the groups did the work and they say half a meter. Then, the lowest eustatic sea level in the last hundred thousand years: it's minus 120 meters. You saw it in Kenneth's presentation; that was the lowest in the last 100,000 years, about minus 120 meters. But most of you got it wrong and put minus 30. The average sea level was around minus 60-something, but the lowest point was about minus 120. The fastest natural global sea level rise recorded so far: Ken, do you want to answer this one? You said four meters, I think, per century or something.

Yes, that's right. It's per century.
So most of the people got it wrong. When the ice sheets started to melt at the end of the ice age, a lot of water was released into the oceans. So you know, in some places you were too pessimistic and in some places too optimistic. Loss of large animals can lead to loss in biomass, and there's a paper published that estimates about 40%. So again, most of you got it wrong; only 40% got it right. It's a 40% loss in biomass. Humans are potential threats for the extinction of how many species by 2050? It's about one million; that's the estimate. And here you got it right; you're all the united-by-diversity people. And the last question: what kind of diversity is lost most rapidly? Most of you got it wrong. It's linguistic diversity, and that's the work of Sietze; that's why he's doing his work, because it is happening most rapidly. So linguistic diversity is disappearing most rapidly. Those were the results of the first poll. You wanted to say something? No, I just wanted to express my sadness when talking about what we are losing. So. Okay, let's do the second poll. Share it with me, please, and I will also open it in the browser. It's the second one. Estimate the total number of islands larger than 10 square kilometers. It's about 6,000, so most of you got it wrong too. Looks like people put 25,000; you were thinking that it's a much bigger number, but it is only the ones bigger than 10 square kilometers, so 10 pixels of one by one kilometer. The correct one, sorry, what's the right one? 6,000. So only 5% got it right. What's the percentage of global biodiversity across taxa found on islands? The correct answer is 20%, and most of you picked what looks like a reasonable number, one quarter, so you were close. But it's not 5% or 10%; it's 20%. What's the percentage of all recorded extinctions in the past 500 years that occurred on islands? Ken, can you answer? Was it 70%, I think? I don't know.
Yeah, I think it's 70%. Actually, Sietze put that one in. So yeah, depending on the source it's estimated between 60 and 75%. But it's of course important to know that these are recorded extinctions. And Ken also pointed out that you might argue it's much easier to recognize that an island species went extinct, because, well, if we haven't found it here, then it's gone everywhere, because it was endemic. So okay. The mining area of the world: it's about the size of Croatia, so three of you got it right. Most of you were over-pessimistic; it's not the size of Germany, it's the size of Croatia. Then the highest per capita CO2 emission bill: the highest bill should go to Australia... oh no, sorry, Canada. Canada is the biggest CO2-per-capita emitter, so Canada is the worst. Why is this? Do you know why this is? Is it because of tar sands? They extract crude oil from tar sands? I will have to look at it. But yeah, I also think they drive large, inefficient cars. The country is sparse and there are big distances, so I think they have to commute and then they spend more fuel. But they are also producers of oil and such things. From the listed countries, the highest per capita biodiversity bill should probably be given to... the answer is Australia. There's a paper published saying that they are responsible for the relatively largest extinction and change in the environment, very quickly, and they're not a large population; there are, I think, 26 or 30 million people. So it's Australia when you look per capita. Then the country investing the most in megafauna conservation is Namibia. So it's actually Namibia; from these countries, they lead. But Costa Rica is also good, so those of you that put Costa Rica, you were not so much off. And this is only on megafauna, so it's not conservation of everything, just megafauna.
So there are some nuances in the questions that you have to read carefully. Total estimated annual spending per capita that could cover the whole conservation costs and help preserve biodiversity: how much would it be? There's no correct answer; this is a trick question I put in. It's very interesting to see that it's distributed kind of equally, but there's no correct answer. I don't know what the cost is; somebody will have to calculate it. I'm thinking it's more towards the bottom. It's for sure not 100 euros per year; whoever put that, you are completely over-optimistic. Then we have the country with the highest percentage of protected area per capita: it's Germany. Germany has the highest percentage of officially protected area; okay, the level of protection is another discussion point, but they have the highest percentage. You would expect Canada. Yeah, I was also thinking Canada, but it turns out it's Germany. Number of trees the European Commission has promised to plant: it's three billion. They have promised three billion by 2030. They have a big program; they would like to plant three billion trees over Europe until 2030, so over the coming years. So these were the questions, and now the last one is the opinion questions. I don't hear anything anymore. Let me open it in a browser. These are the opinion questions. So: are the COVID pandemic and the loss of biodiversity structurally correlated? Most of you believe yes, that increased interaction between humans and wildlife is the main reason behind this pandemic. In fact, there is work published on that; Ken just sent me a paper. So I think you are going in the right direction. So yes, it is most likely highly correlated, and I'm also glad that you believe that.
And nobody believes that COVID-19 was a leak from a laboratory. Nice to see. Then we have: how satisfied are you with nature conservation policies in your country or region? Most of you are seriously disappointed. So I think we are going in a Greta Thunberg direction here, which is actually a good thing. The most important conservation variables, in your opinion, should be maps of natural habitat intactness levels, so how much we impact natural areas. That's more important than net primary productivity, but that comes as a second thing. Also interesting: can agriculture be combined with biodiversity restoration while being economically viable? We spoke about this a lot, and you think that we have to re-innovate, basically. Some of you don't believe in it, but most people, 80% I think, answered yes. That's nice to see. What could be the best policy to reduce loss of biodiversity, loss of wildlife? We offered some really extreme options, like total combat against polluters. Looks like most of you believe in international collaboration and conventions that most countries would accept in the same way; in the European Union we now deal with these problems like that, and I think it could be scaled up to the whole world. On the mainstream economy of the future: we need a smooth transition to the economization of our ecosystem functions. Very interesting. Most of you think that the system is, let's say, okay, but you just have to account for the natural services and the value of the natural land more objectively. And yes, last one: how much of your income would you be willing to pay for biodiversity through taxes? Most of you are between 5% and 15%, so there's a good chance that that's something that's going to come. It's nice to see that you are willing; only 6% of you are not willing to pay anything. Can I just say something?
Because I just added a link to a very interesting, not yet published study: if you want to conserve 44% of the world's nature, we need to pay about 0.12% of the global gross domestic product, which seems pretty low. And the idea is that the indigenous people who live in these forests, in Amazonia, etc., are being paid; they get an income to conserve their areas. It's a beautiful idea. And people are really thinking about how much it will cost, whether it is operational, what we need, where we need to conserve. So there are hopeful developments in this direction. Can you one more time just repeat, please: how much to conserve how much? So, 44% of the global terrestrial surface, if I'm right. As you said, it's one of the studies of one of our postdocs who was also involved, James Allen. And then these economists calculated how much it would cost to conserve this and to pay people to conserve these regions. And you don't want to fence off regions and exclude people; you want, as Sietze actually points out, the indigenous people to take care of their own environment, which in the end is beneficial for our whole biosphere. And then it was 0.12% of the global gross domestic product. That sounds like a very low number; it sounds like it's really cheap. But okay, that's only 44%; I mean, if you go to like 80%, maybe this number will triple. And which 44%? Maybe if you go for the low-hanging fruit; but let's say if you go to small, tricky areas where you have conflicts of interest, and really highly populated areas like India or China, maybe then the cost will be higher. But still, it's nice to see. Thank you for sharing that paper; that's super interesting. And with this, we have to stop, unfortunately; we went a bit over time. But one more time, super big thanks to Ken and Sietze, very interesting talks.
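As a rough sanity check of the numbers quoted above (0.12% of global GDP per year to conserve 44% of terrestrial land), here is a small back-of-the-envelope calculation. The global GDP and population figures are my own assumptions for illustration, not from the discussion:

```python
# Back-of-the-envelope: annual cost of conserving 44% of terrestrial land,
# using the 0.12%-of-global-GDP figure quoted in the discussion.
GLOBAL_GDP_USD = 96e12    # roughly world GDP circa 2021; assumption
SHARE_OF_GDP = 0.0012     # 0.12%, as quoted in the discussion
WORLD_POPULATION = 8e9    # assumption

annual_cost = GLOBAL_GDP_USD * SHARE_OF_GDP
per_capita = annual_cost / WORLD_POPULATION

print(f"annual cost: {annual_cost / 1e9:.0f} billion USD")  # ~115 billion
print(f"per person:  {per_capita:.1f} USD/year")            # ~14 USD
```

On these assumptions the bill comes to roughly 115 billion USD per year, or about 14 USD per person, which supports the speakers' impression that the quoted figure is strikingly low.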
And the talks will be published; the video will be published and you can watch it, so anything you missed, you can come back and watch. We will also share the results of the poll, including the correct answers. For that, you need to subscribe to our newsletter: just go to our website, OpenGeoHub, or maybe, Vale, you can send a link. You subscribe to our newsletter, and in about two weeks we will send a newsletter with the results of the poll and also the link to the video, so you can watch the video again, read about the results, and see how things went. Interestingly, on most questions people answered wrong, or they had somewhat diverging opinions. So yes, and with this, one more time: thank you so much, Sietze and Ken. Good luck with your work.
To help us answer the question “Can nature keep up with human-induced environmental changes, and can ecosystem regeneration be enough to save our Planet?” we invited two biodiversity experts: Dr Kenneth Rijsdijk, Island Biogeography researcher at the Institute for Biodiversity and Ecosystem Dynamics at the University of Amsterdam, and Sietze Norder, Postdoctoral Researcher at the Leiden University Centre for Linguistics.
10.5446/56092 (DOI)
We will be thinking about just a random set of points, with no symmetry whatsoever, as a model for the solids that we've been discussing in many aspects this week. Of course, Z^d in R^d is an example of this, and it has a lot of translational symmetry, but I'm not going to assume any such thing. There are only two properties that the sets I'm considering will have. Let's erase this brilliant picture and put it here. Oh yeah, and I don't know if everybody can read the title, but this is joint work with Chris Bourne, who spoke earlier this week. So I'm going to look at L in R^d, a discrete subset, and I'm going to assume given a pair of positive reals, with preferably one larger than the other, and oftentimes strictly larger. Then we say that L is an (r, R)-Delone set if the following two properties hold. The first property is that L is little-r uniformly discrete, and I will tell you what I mean by that: for all x in R^d, the intersection of L with the ball of radius small r around x contains at most one point. And (2), L is big-R relatively dense, which means that for all x in R^d, the intersection of L with the ball of radius big R contains at least one element. So it's a discrete set, but the spacing of this set will not become arbitrarily large, and there is a minimum range in which you see different points of the set, and this is all I'm assuming. This means in particular that if I have such a set, then L is by no means translation invariant. If I translate L over some vector in R^d, I get another such set; these properties will be preserved, but it will not be the same set. These? Yeah. Well, it's finite. How are you checking? Sure, but then I should maybe draw more points. Or make the circle smaller; it's easier than making the blackboard bigger. Fair point. Yeah, so if this is our point x, then this would be the small circle.
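The two defining properties just stated can be written compactly as follows; this is just a transcription of what is said above, with notation as in the talk:

```latex
% An (r,R)-Delone set L \subset \mathbb{R}^d satisfies:
\text{(1)}\quad \forall x \in \mathbb{R}^d:\ \#\big(L \cap B_r(x)\big) \le 1
\qquad \text{($r$-uniform discreteness)}
\\
\text{(2)}\quad \forall x \in \mathbb{R}^d:\ \#\big(L \cap B_R(x)\big) \ge 1
\qquad \text{($R$-relative density)}
```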
Yeah, wait, I don't know what you want to see, actually. What do you want me to illustrate? Yeah, all right. And a bigger one. What you are excluding physically is the possibility of artificially creating a one-dimensional random set inside of two-space. And people play these tricks in math, you know, where you mostly put things along the x-axis and then every once in a while, as a courtesy, you throw some things up here. Okay. That will not be a Delone set, as you can see, and it won't translate, and you're like, well, that's not really... that's a blend of a one-dimensional system and a two-dimensional system. Yeah. So you don't want to be able to leave holes and create patterns other than... you've left lots of random voids, but it still has to be ultimately a two-dimensional thing. Right. It comes up a lot in some cases. Sure. I thought so. Wouldn't the vertex set of every quasicrystal be a Delone set? Yes, I think so. Yeah. So this basically covers everything. Well, yeah, anything you could reasonably think of. So what goes wrong then? Right. Okay. Yeah. And this is what we don't want. It's more like what you get from some random point process where there's some sort of repulsion. Yeah. All right. So as I said, because there is no translation invariance, if we want to think about how this set behaves with respect to translations, we simply have to make our space bigger. We're not going to look at one set, but at its translates as well. And for this, we need some kind of topology on the space of sets. So if I have such a Delone set, then I can take the associated sum of Dirac measures. This is an atomic measure on R^d whose support is exactly this set. And the d-dimensional (r, R)-Delone sets are denoted by D^d_{rR}; I just take the set of all of these sets. Then L goes to mu_L gives a map of D^d_{rR} into the measures on R^d.
And this is a subset of the dual space of the compactly supported functions, so I get a weak-star topology. Since M(R^d) is a subset of the dual, we obtain a topology on D^d_{rR} coming from the weak-star topology. And I will only use this in the beginning of this talk, just to topologize some spaces. A fundamental result on Delone sets is that this set of (r, R)-Delone sets is a compact and metrizable space. And now on this space, viewing it as a topological space, the group R^d acts by translation: for x in R^d, T_x L is just the set of translates. And this action is nice; it's continuous with respect to this topology. So then what you can do... Maybe you can, but I'm not aware of that. Why would you do that? So I mean, we embed this here as a measure. You mean you want to look at the Delone set in some Riemannian manifold on which your group acts. I guess you could set up a theory like that; I don't see why you couldn't, but I've never really seen it. So you are asking whether you would already have to modify things over there? You have these axioms, so you can probably just replace these balls with the balls in the given metric. I mean, I don't really see why anything here would change. You know, theoretically, sometimes you take a copy of the same compact model. So you'd rather shrink the metric or grow it, and then you'd have a sequence of finite Delone sets; it's only in the limit that they'd be infinite. I mean, people who study those finite-volume conditions, say, take a sequence of models going out to infinity. But then there is no underlying big ambient manifold, you say? Right. Yeah. You have to study each one individually and then look at limiting properties. Yeah. But purely theoretically, I don't see any obstructions to that. And it has been studied; I've been working on that. I can safely say it's kind of a hard one. Right.
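The embedding and topology described here can be summarized in symbols; this is a sketch matching the definitions given in the talk:

```latex
% Each Delone set is embedded as an atomic measure:
\mu_L \;=\; \sum_{p \in L} \delta_p \;\in\; \mathcal{M}(\mathbb{R}^d)
\;\subset\; C_c(\mathbb{R}^d)^*,
\\
% and D^d_{rR} inherits the weak-* topology:
L_n \to L \;:\Longleftrightarrow\;
\int f \, d\mu_{L_n} \to \int f \, d\mu_L
\quad \forall f \in C_c(\mathbb{R}^d).
```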
Because the hull of a translate is equal to the hull of L. Yes, it's obvious; you move from one to the other. Yeah. Anyway, using this structure here, we can now define the continuous hull. It is a nice structure. What does it mean? I know. I can handle it. It's all right. So let L now be an (r, R)-Delone set. Then the hull, let's say the hull of L, is Omega_L, which is defined to be the closure of the orbit of this given L in this topology. So this is some big... or not big, actually, it's not: Omega_L is a compact space as well. And a very simple example: if L is Z^d, then the hull is just a torus. So then Omega_L is just R^d mod Z^d, which is also known as T^d. But in general, this can be some arbitrary compact space. And this you should think of as encoding somehow the disorder, or the non-symmetry, that the set has. All right. Now, I won't say much about the following. To L we can then associate the crossed product C*-algebra; don't worry if you don't know what this is, because I'm not going to work with it. You enlarge the continuous functions by incorporating the R^d action. Now, for those people who know a bit of C*-algebras, this is a locally compact topological group, and this crossed product therefore requires some analysis to understand. And we would like to have a more discrete object that is in some sense equivalent to this object. So again, in the C*-algebra philosophy, what you should think of here is that this big C*-algebra encodes the quotient of the action of R^d on Omega_L. This quotient will not be defined as a space, because the action often has dense orbits or something like this. So let's say why we cannot work with Omega_L mod R^d in general: it will have a very bad topology; there's little information in that space. And that's what you do in non-commutative geometry often: you replace this space by that algebra.
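In symbols, the hull and the periodic example just mentioned read:

```latex
% Continuous hull: orbit closure in the weak-* topology
\Omega_L \;=\; \overline{\{\, L - x \;:\; x \in \mathbb{R}^d \,\}}
\;\subset\; \mathcal{D}^d_{rR},
\\
% Periodic example:
L = \mathbb{Z}^d \;\Longrightarrow\;
\Omega_L \;\cong\; \mathbb{R}^d / \mathbb{Z}^d \;=\; \mathbb{T}^d.
```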
These crossed products also have the property that if the action is nice in a precise sense, then these two objects are equivalent. But this object is not always defined; that object is always defined. So that's the underlying philosophy here. This is what is called a continuous crossed product. We would like to have a discrete version of this, mainly because it's technically easier to work with. Yeah, this thing. Okay. In the continuous case, that is not so easy; this is what you could better do with the discrete version. So you build this algebra by... no, no, I'm just saying this because it illustrates the problem a little bit. So I will not say much, but for that algebra there, you take compactly supported functions on R^d with values in these continuous functions here. Now, this is really a locally compact group, so you have no unit, for example, and all such things, and there is analysis involved; that's why this is not necessarily easy to work with. That was it. So we're going to look at a slightly smaller object called the transversal: define Omega_0 to be those omega in the hull of my given set L (the set L is fixed from now on; we fixed one Delone set and we look at its hull) such that omega contains the origin. So we only look at the translates of L that contain the origin. This is a severe restriction, because it basically means that you only look at translates of L by elements of L, or their negatives or something. This is a much smaller set. But it is a closed subset of Omega_L, and hence also compact. That means that I really need to know that all limit points are really Delone sets somehow. By the closure? Yes, you'd have to know that. How do I know that? I wouldn't know that off the top of my head; this is probably a well-known result in the dynamical systems literature. So which small r and big r does a set in the hull satisfy? I mean, you need to see that small r survives; you need to prove this. I mean, that's definitely clear.
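The transversal just defined, together with the periodic example discussed shortly after, can be written as:

```latex
% Transversal: translates of L that contain the origin
\Omega_0 \;=\; \{\, \omega \in \Omega_L \;:\; 0 \in \omega \,\},
\\
% Periodic example: the transversal collapses to a point
L = \mathbb{Z}^d \;\Longrightarrow\; \Omega_0 = \{\mathbb{Z}^d\}
\quad\text{(a single point)}.
```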
But you can find this in the literature. It's a fair question; it needs proof, this statement. It's a fact that this is well-defined. So now back to our simple example, which we want to stay away from: for L equal to Z^d, Omega_0 is just {Z^d}, and this is just a point. So if my lattice L has enough symmetry, then this Omega_0 is really very small. It's only of interest when there is some aperiodicity, so to speak. Let me skip this for the moment, or say it in the math later. And this transversal discretizes, in a precise sense which I will try to convey, the action of R^d on Omega_L. And for that, we're going to define the central object of this talk, for which I will use a new board. What we're trying to do is: we would like to have an analog of the Z^d situation, where there is a discrete group acting. This we're not going to have, so we need to introduce a more general mathematical object, which is called a groupoid. I will not give you a formal definition of a groupoid, but I will give this example. The idea of groupoids is that you have a group in which you cannot compose every element; that's the heuristic definition. You have a set, and certain pairs of elements are composable, others are not. It's a category in which every morphism is invertible; that's the very quick definition, which I didn't want to give. I will draw a picture shortly where you will see that. So I will just give the example. G will denote this set here: we take all omega in Omega_0 and x in omega. Now omega in Omega_0 just meant that 0 is in omega, and now we also require that x is in omega; we consider these pairs. It's a very simple set. Right, and that's the set. So this is a subset of Omega_L cross R^d, or even Omega_0 cross R^d. I want to see this inside Omega... doesn't matter, doesn't matter.
The fact that we have this set inclusion here gives me straight away that this set G has a topology; I won't say much about it, but it has a topology coming from this inclusion. And now there is some algebraic structure here. There are range and source maps, which are basically just projection maps: r from G to Omega_0 and s from G to Omega_0. So r takes a pair (omega, x) and sends it to omega; s takes (omega, x) and takes it to the translate omega minus x. This shows you that you can think of G as a set of arrows, which I will now draw. We view (omega, x) as an arrow like this, and you see that the point x is here, and that point x will get mapped to 0 under this operation here. Now, because we have arrows, if they have the right range and source, we can put them together. That's the partial multiplication I was talking about. Let me write that informally here. Composition of arrows would then look like this, in a picture, and in set notation I will write it like this; this is the formal algebraic operation. And you see here that the source of this element is exactly the range of that element; that's why I'm allowed to compose them. Can I expand x? Remove the restriction on x to get a Morita-equivalent object? This object will be Morita equivalent. I don't understand what you mean exactly. Well, just take... for a bimodule, you have a second condition: x is in omega. Yes. So you can eliminate that, because it then becomes, instead of an algebra, a bimodule. Right. Yeah, yeah. You remove one of the conditions, you get a space on which this thing acts and that thing acts, and that gives you the... Ah, the transversal. Yes, exactly; it's exactly a transversal in the sense of groupoids. And then you can complete that into a bimodule and all that. We're not going to go there here, but happy to talk about that later. That's a beautiful way of looking at things. Yeah. So let me erase this.
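The groupoid structure described verbally here (the set, the range and source maps, and the composition of arrows) can be summarized as follows; the inverse is my addition, implicit in the arrow picture:

```latex
% The transversal groupoid:
\mathcal{G} \;=\; \{\, (\omega, x) \;:\; \omega \in \Omega_0,\ x \in \omega \,\}
\;\subset\; \Omega_0 \times \mathbb{R}^d,
\\
% Range and source maps:
r(\omega, x) = \omega, \qquad s(\omega, x) = \omega - x \;\in\; \Omega_0,
\\
% Composition (allowed when source matches range) and inverse:
(\omega, x)\cdot(\omega - x,\, y) \;=\; (\omega,\, x + y),
\qquad
(\omega, x)^{-1} \;=\; (\omega - x,\, -x).
```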
Now we can combine these structures to make an algebra out of this, which is the analog of that crossed product algebra, which in turn was the analog of the non-existent quotient. I'm now going to make, somehow, a quotient of this transversal by a discrete object. Because G carries a locally compact topology, we can look at the space of complex-valued, compactly supported continuous functions on it. So f is continuous with compact support. This is just a linear space, but the algebraic structure that I described here allows me to define a convolution product on it, and this is really the analog of the convolution product you would see in a group algebra; it's given by exactly the same formula. So if I have f and g in C_c(G), then f convolved with g, evaluated at (omega, x), will be given by the following sum. If you think about this briefly, you really see that this is just a convolution. So this is now just an algebraic object, a linear space; there's no norm here yet that would make it into a C*-algebra. We need to represent this object somewhere, and for that we're going to look at the family of Hilbert spaces that we get from these translates of the given lattice. These are all discrete sets, so we can form little-l2 spaces. We get a family of Hilbert spaces H_omega, which I define to be little l2 of omega with complex coefficients; this is just a little l2 space on a set. And this family, if I put it all together, is what is called a continuous field. If you've seen this, you might be happy. If you haven't, just think: I have this space Omega_0, at each point I have a Hilbert space, and somehow the variation is continuous. We can then look at the space of sections of this field, and that's where this algebra with the convolution product will act.
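The sum referred to here is not written out in the talk; the standard groupoid convolution formula, which I take to be what is meant, reads:

```latex
% Convolution on C_c(\mathcal{G}), mirroring the group-algebra formula:
(f * g)(\omega, x) \;=\; \sum_{y \,\in\, \omega}
f(\omega,\, y)\; g(\omega - y,\; x - y),
\qquad f, g \in C_c(\mathcal{G}).
```

Each term composes the arrow $(\omega, y)$ with $(\omega - y,\, x - y)$, whose product is $(\omega, x)$, exactly as in the arrow composition above.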
So the space of sections of this field I'm going to denote by E; this is E(Omega_0, (H_omega)), if you will. This is a space of maps, which I'm going to denote by little e, into this assemblage, such that e(omega) is an element of H_omega. So it's really like a vector bundle, but the fibers are Hilbert spaces, and I look at the sections of this Hilbert bundle. Of course, there are continuity requirements here, which are technical, and I will not bore you with them. But you can now globalize: on each H_omega I have an inner product, because it's a Hilbert space, and so I can get an inner product on this space of sections, which is now not complex-valued but function-valued. If I have e1 and e2 in this E, then (e1, e2)(omega) will be the sum over x in omega... is that right? I should just try. Yeah. Of e1(omega)(x)-bar times e2(omega)(x). If I restrict this to a given omega, I get the inner product on this l2. And the fact that this field is continuous is reflected in the fact that this inner product takes values in the continuous functions: (e1, e2) is now an element of the continuous functions on Omega_0. For those of you who know, this is just an example of a Hilbert C*-module over a commutative algebra; that's all it is, if you know this language. And now this convolution product defines, in a similar way, an action on this space of sections. So the convolution product gives an action of C_c(G) on E, the sections of this field, by just the same formula: if I take f in C_c(G) and e in E, then I let f act on e, and I evaluate it at omega, and also at x, which may be slightly confusing; you just use the same formula as over here. Now this space of sections carries a nice Banach space structure, and that you can use to get an operator norm on the space of operators on these sections. That gives you a norm on C_c(G), and you can write it down explicitly.
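The section space and the function-valued inner product just described can be written as:

```latex
% Sections of the continuous field of Hilbert spaces:
\mathcal{E} \;=\; \Big\{\, e : \Omega_0 \to \bigsqcup_{\omega} H_\omega
\;:\; e(\omega) \in H_\omega = \ell^2(\omega) \,\Big\},
\\
% C(\Omega_0)-valued inner product (Hilbert C*-module structure):
\langle e_1, e_2 \rangle(\omega) \;=\;
\sum_{x \in \omega} \overline{e_1(\omega)(x)}\; e_2(\omega)(x)
\;\in\; C(\Omega_0).
```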
So I'll just finish the sentence here: this gives a norm on C_c(G), which I will write over there. I started at 10.15, right? Yeah, so we're good; I'm not going to go over time. Can you get this norm with some sort of GNS construction? You can view it like that; the formula I'm going to give you now will look like that. So this norm of f in C_c(G), this operator norm (maybe I should write B(E) for the moment, because f acts as an operator on this space of sections, which is a Banach space) will be equal to the sup over omega in Omega_0 of the norm of pi_omega(f). I have not given you pi_omega, but you can let f act on each of these fibers already. This action may not always be good, and the C*-norm that you get is the supremum here. So in that sense it's a kind of GNS construction, where you view each point as a state; it's an equivalent description. We denote the completion of C_c(G) by C*_r(G); this object is called the reduced C*-algebra of the groupoid. So what you've used here is just the algebraic structure of the composition of arrows, and the analytic structure coming from the locally compact topology on G, and from that you get this C*-algebra. I have not given you the involution, but you can also write down an involution coming from that algebraic structure; I don't want to spend time on that right now. Now in the simple example where there is a lot of periodicity, if L is Z^d, then what you get from this whole construction is just the group C*-algebra of Z^d. Because Omega_0 was just a point, the fiber over that point is exactly l2 of Z^d, and if you write down that convolution product, you get the convolution product of this group, and so you get the group C*-algebra. So that was 45 minutes of just constructing an object, and now I will spend the last part of this talk telling you a little bit about how this relates to what we've seen this week.
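The reduced norm is stated explicitly in the talk; the fiber representation pi_omega is not, so the second formula below is one standard way to write it, included here as a sketch rather than a quotation:

```latex
% Reduced C*-norm as a supremum over fiber representations:
\|f\| \;=\; \sup_{\omega \in \Omega_0}
\big\| \pi_\omega(f) \big\|_{\mathcal{B}(\ell^2(\omega))},
\qquad
C^*_r(\mathcal{G}) \;=\; \overline{C_c(\mathcal{G})}^{\,\|\cdot\|},
\\
% One standard form of the fiber representation (not spelled out in the talk):
\big(\pi_\omega(f)\,\psi\big)(x) \;=\;
\sum_{y \in \omega} f(\omega - x,\; y - x)\,\psi(y),
\qquad \psi \in \ell^2(\omega).
```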
So the idea is: what we've seen this week are these gapped Hamiltonians, and the idea here is that a gapped Hamiltonian on L will give... so let me first write an element in this reduced algebra here. Sure, yeah, you can get other elements; the gappedness is not needed for that. Feel free to comment more from this part on, because I'm moving away from my own expertise here. And now I run into the problem that the symbol H has been overused. So you can write down such a Hamiltonian, which I will write with a capital H: (H_omega psi)(x). We saw this Hamiltonian yesterday; I'm going to write it slightly differently and cryptically. Beta is some positive number, and psi is an element of l2(omega) here. And the sigma_xy comes from a magnetic contribution. This I also haven't said: you can modify the whole groupoid construction. If you have a magnetic field going through your system, there is a certain 2-cocycle, and you can twist this convolution product by it; for those of you who've seen group C*-algebras, that's all completely analogous. So you can incorporate this magnetic field information here as well. So this is just an example. This is just an example, yeah. And Chris told me that for these examples, it's not even always known that it's gapped. Or? It's in general wide open. Yeah. So I'm completely not talking about this kind of problem. I'm going to say: okay, this is an example; if it is gapped, you can do certain things. So if Delta in R is a spectral gap (I'm assuming that exists), then P of E, which we can build in this way... maybe E is not good either. Yeah. Now mu is also not good. Throw in a letter, yeah, F. Thanks. So this gapped Hamiltonian gives me an element H in this groupoid C*-algebra; I'm not explaining exactly how that works. And then I take the spectral projection of that element onto the part below that gap: this is just the characteristic function of this set here. So this is again in the C*-algebra.
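The Hamiltonian is only written "cryptically" in the talk; a plausible reading, with an exponentially decaying hopping kernel governed by beta and a magnetic phase sigma, is the following. The explicit kernel is my guess, not a quotation; only the Fermi projection formula is as stated:

```latex
% A plausible form of the gapped Hamiltonian on each fiber (kernel is a guess):
(H_\omega \psi)(x) \;=\; \sum_{y \in \omega}
e^{-\beta |x - y|}\, e^{\,i \sigma_{xy}}\, \psi(y),
\qquad \psi \in \ell^2(\omega),\ \beta > 0,
\\
% Fermi projection onto the spectrum below the gap \Delta:
P_F \;=\; \chi_{(-\infty,\, \inf \Delta]}(H) \;\in\; C^*_r(\mathcal{G}).
```

The point made next in the talk is that the gap makes the characteristic function effectively continuous on the spectrum, so the continuous functional calculus keeps P_F inside the C*-algebra.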
So really what you need here is that you use the gap to basically say that this function is continuous. C*-algebras have a continuous functional calculus, and that's why you stay in that algebra. So that's where you use that it's gapped. Otherwise you would end up in some von Neumann closure, which is not always a good thing. I think it's bounded. This confuses me as well. It has to be uniformly discrete. If the points were allowed to cluster, then this would be unbounded. So it's the uniform discreteness that saves you, instead of allowing it to be unbounded. Is that in the groupoid C*-algebra or some matrix algebra? That's a good question. This one is in the groupoid C*-algebra. I also think so. But I mean, if there are internal degrees of freedom, then this is right. Yeah, so you could couple this to some bundle. You get something in a matrix algebra. That is technically not any harder. And so these non-commutative Chern numbers that we've seen associated to this H now arise from certain functionals on the K-theory of this algebra with values in the complex numbers. And because K-theory has certain homotopy invariance properties, you will then learn that these Chern numbers have some rigidity under perturbations. I will indicate very briefly how you get this. So these functionals are constructed from position operators on this space of sections E here. So it's good that it's still there. Where, on the sections of E. So for i from 1 to d, because remember, we were sitting in R^d, and for a section xi, X_i xi of (omega, x) would just be multiplication by the coordinate x_i. So this X here is capital, because I want to indicate it's an operator. This is just the i-th coordinate of this point. Then if S_d is a d-dimensional spinor space, so here we go, then you can write X, which is now just the sum of these operators X_i gamma_i, where the gamma_i are generators of Cliff_d, which acts on this S_d.
And the only reason you're doing it is the usual reason: that these things anti-commute. So if you square this, you get a sum of squares; you get a notion of ellipticity. Now, this all happens not on one Hilbert space, but on this whole field. So now I will just throw in some terms, again, for those of you who know, and also to maybe provoke some thought. So this kind of operator almost literally defines for you an element in KK-theory. So this is already enough. So I'm not going to tell you what KK-theory is. But so this is an unbounded operator. This directly defines an element x in the KK-theory of this algebra C*(G) with coefficients in this C(Omega_0). So this was the continuous functions. And again, if, for example, this L was Z^d, then this was just a point. You would get here the complex numbers. That's what's known as the K-homology of this algebra, which is dual to the K-theory, and that exactly induces such functionals here. So the main machine that we use for this is what is called the Kasparov product. This is a coupling, which I will describe like this now. So I have this group here, which has two variables. And this then goes to the K-theory of this commutative object here. And this you can identify with the K-theory of vector bundles over this Omega_0 that was talked about earlier in the week. And so then the problem of getting a functional here is now reduced to just looking at certain functionals on this space. So what I'm doing is I'm using these position operators, which are dual to differentiation, to get from this potentially complicated K-group where this Fermi projection lives to the K-theory of a commutative object. And then I can go further down. So if I have a map to C, then I get a functional on my K-group. And this is how I can recover these non-commutative Chern numbers. So I will just end with a formula for that. So let's make a little bit more space. So just to not run over time. Yeah, I'm fine. I'm trying to be clear.
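The anti-commutation point can be checked concretely. In d = 2 one can take sigma_x, sigma_y as Clifford generators; then the Dirac-type operator X = X_1 ⊗ gamma_1 + X_2 ⊗ gamma_2 squares to (X_1² + X_2²) ⊗ 1, because the position operators commute while the gammas anti-commute. A toy check of my own, with a few made-up points in R²:

```python
import numpy as np

g1 = np.array([[0, 1], [1, 0]], dtype=complex)     # gamma_1 = sigma_x
g2 = np.array([[0, -1j], [1j, 0]])                 # gamma_2 = sigma_y

# positions of a few hypothetical points in R^2; X_i = multiplication by x_i
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 1.0]])
X1 = np.diag(pts[:, 0]).astype(complex)
X2 = np.diag(pts[:, 1]).astype(complex)

# Dirac-type operator X = X_1 (x) gamma_1 + X_2 (x) gamma_2
D = np.kron(X1, g1) + np.kron(X2, g2)

# since gamma_i gamma_j + gamma_j gamma_i = 2 delta_ij and the X_i commute,
# the cross terms cancel and D^2 = (X_1^2 + X_2^2) (x) 1
D2_expected = np.kron(X1 @ X1 + X2 @ X2, np.eye(2))
```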
So let me say it like this. So there is a one-to-one correspondence between measures on Omega_L and measures on Omega_0. I'm being very rough here. But I have this R^d action here on Omega_L. You could imagine that you have an ergodic measure for that action. There's a corresponding measure here on Omega_0, which gives you what is called a trace. And so a measure mu gives a map, maybe I'll write it like this, not to explain how this works. But this is basically a fiber-wise integration idea if you have a vector bundle. Then you can compute mu_* composed with this position operator element that I described there. And I evaluate it on this projection P_F that I had over here. So that P_F lives in the K-theory of C*(G). I let this element x in this mysterious KK-group act on it. And then I compose with this functional. And then what I get is exactly these Chern number formulas. So I'll just end with that. You're going to sum over the d-dimensional symmetric group, over permutations. Of course, you need to work to get this. Now I get these commutators with these coordinate functions. And c_d is some constant. So these are just some numbers. And I'm assuming d even. There are analogous formulas for the odd case. And it really comes from this K-theoretic structure. So from a formula like this, you see that if you change, because this is a functional on the K-theory of this reduced groupoid C*-algebra, if you change this projection but you stay in the same K-class, this number won't change. Because this depends only on the K-theory class. More sums? No, no, no. It's just, what is the range of this guy? Can you say what the range of this is? Can I? Maybe Chris can say that. It's a bit obvious. All right, you want to make that point. Yeah, sure. OK, so you were going to say that, yeah, it'll be normalized if you choose the measure in the same way. So let me end with that.
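As transcribed, the closing Chern-number formula (for d even, with c_d a dimension-dependent constant and the trace induced by the measure) should read schematically as follows; this is my reconstruction of the board formula from the spoken description, not a verbatim copy:

```latex
\[
  \mathrm{Ch}_d(P_F) \;=\; c_d \sum_{\sigma \in S_d} (-1)^{\sigma}\;
  \mathcal{T}_\mu\!\Big( P_F \,[X_{\sigma(1)}, P_F]\cdots[X_{\sigma(d)}, P_F] \Big).
\]
```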
So if there is an ergodic measure on Omega_L, and you take the corresponding measure on Omega_0, then you can compute this using point evaluations. So if mu comes from an ergodic measure on Omega_L, then mu_*(x)(P_F) is actually an integer. All right, I will leave it at that. Questions? Comments?
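The integrality claimed here can be seen numerically in the translation-invariant case, where the pairing reduces to the ordinary Chern number of a Bloch band. Below is a sketch of my own (not from the talk) using the Fukui-Hatsugai-Suzuki lattice algorithm on a standard two-band model (the Qi-Wu-Zhang model): the sum of gauge-invariant plaquette phases over the Brillouin zone is 2π times an integer, ±1 in the topological phase and 0 in the trivial one.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_band(kx, ky, m):
    """Lower-band eigenvector of the two-band Bloch Hamiltonian h(k)."""
    h = np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz
    _, v = np.linalg.eigh(h)
    return v[:, 0]

def chern_number(m, N=30):
    """Fukui-Hatsugai-Suzuki lattice Chern number of the lower band."""
    ks = 2 * np.pi * np.arange(N) / N
    u = np.array([[lower_band(kx, ky, m) for ky in ks] for kx in ks])
    total = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            # gauge-invariant plaquette phase from the four link variables
            prod = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                    * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            total += np.angle(prod)
    return total / (2 * np.pi)
```

For m = 1 (gapped, topological) the result is ±1 up to machine precision; for m = 3 (gapped, trivial) it is 0.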
A Delone set is a uniformly discrete and relatively dense subset of Euclidean space \mathbb{R}^{d}. As such they constitute a mathematical model for a general solid material. By choosing an abstract transversal for the translation action on the orbit space of the Delone set, one obtains an étale groupoid. In the absence of a \mathbb{Z}^d-labelling, the associated groupoid C*-algebra replaces the crossed product algebra as the natural algebra of observables. The K-theory of the groupoid C*-algebra is a natural home for the formulation of the bulk-boundary correspondence for topological insulators as well as a source for numerical invariants of (weak) topological phases. This is joint work with Chris Bourne.
10.5446/56095 (DOI)
I like to go to workshops like that myself and listen to talks from various perspectives. And so I'll certainly try to address this mixed audience. And I'll do my best. You will be confused. You will ask questions. Some things will be clarified today, tomorrow. And I'm lucky that Michael Levin, this morning, in some sense, already set the stage for the basic thing I want to address, even though maybe I'll do it slightly differently. It's about this idea that if there is some mathematical structure describing anyons and their basic properties, in terms of essentially algebraic things only, then every physicist will want to know: if this is really something about condensed matter physics, then I should be able to extract that information out of what I think is a good model for the physics I'm considering. So how do I do that? And so the essence of this talk is about an approach to do that. I mean, I framed it in terms of stability. And I will explain what I mean by that. My point of view is that if this information you extract is of any value, then it should not be too sensitive to the exact details of the Hamiltonian you're using to define these things. If there are anyons of a certain type, and there's sort of an epsilon difference in the Hamiltonian, for sure there are still anyons of that certain type, not anything different. And I'll use, like Michael did, the toric code model as the most simple example where I can sort of illustrate how it works and why the assumptions I'm making are satisfied. In fact, I decided that I will not really present the most general result, the theorem, because then there are assumptions that I have to formulate that look a little bit technical. They are clearly satisfied in the toric code model. It's very obvious. And they're satisfied in other models. And they have robustness themselves. But by using mainly that example, I can gloss over some of the technical details that are involved in that.
So this is mainly joint work with Matthew Cha, who was a student in Davis and is now a postdoc in East Lansing; Pieter Naaijkens, who was a postdoc in Davis, now a postdoc in Madrid; and Nicholas Schoeman, who was an undergraduate student in Davis, and is now a PhD student in Berkeley. And also, there's some ongoing work with Sven Bachmann, who is here. But we haven't gotten very far. So I will have to report on that another time. But I should acknowledge many useful discussions with Sven. So it's about quasi-particles. So anyons in two-dimensional lattice systems are sort of another manifestation of quasi-particles. When you have them, it's a great way to start understanding the spectrum and the properties of a model. Then I will talk about the toric code model a little bit. I can probably go over this quite swiftly; we already did this this morning. And to show you what I mean by stability, and concretely, what can change and what doesn't change if you modify the Hamiltonian a little bit, I'll introduce a model. We call it the dynamical toric code model. It has additional terms in the Hamiltonian. There's nontrivial dynamics for the anyons. Some interesting things you see. And the extraction of the information about the anyon types and their statistics also works for such perturbations. So that's why it's there. Here is the one thing that is different in a way. And there's different ways to approach life and different ways to approach this problem. So I will start from the infinite system, and some things simplify if I state things in the infinite system. Then from the mathematical point of view, you have to think back and see exactly what that means and what the Hilbert space is and things like that. So I'll try to explain that, and then I will formulate the main result, which we call stability of superselection sectors in a class of gapped quantum spin systems. So this is kindergarten stuff.
But just to put this in the right frame of mind, these are very simple examples of many-body systems that we can understand in terms of quasi-particles. The Heisenberg ferromagnet, for instance. I've written it here in frustration-free form, because when we can, we want to note that property. But it's just the Heisenberg ferromagnet. Spins of magnitude S, say. And a long, long time ago, and maybe they weren't the first, I don't know, it's the first reference I know, Holstein and Primakoff observed that the excitations above the ground state, which is sort of a fully polarized state with all the spins parallel, can be regarded as a gas of bosons that are weakly interacting. And there's a hardcore constraint, because the bosons correspond to deviations from this fully polarized direction in your ground state. And so you can sort of create excitations for each spin. But you can change the magnetic moment at each site by a maximum of 2S, if S is the total spin in the model. And so the bosons have a hardcore constraint. But that's still a good way to think about the spectrum of the system and the basic dynamics of the low-lying excitations. Another extremely well-known example is the quantum XY chain. And here is the Hamiltonian. I don't have to explain it to you. And I'm sure you don't need a reminder that Lieb, Schultz, and Mattis solved it by mapping it to quasi-free fermions. And then the ground state can be understood as a Fermi sea. And the excitations are the standard excitations you find in the quasi-free fermion system. And this is a good thing. And that's what you're looking for. In general, the spectrum and the dynamics of a many-body system is a really intractable problem. And the most common way to approach this is to sort of understand it as a system of particles with certain characteristics and hopefully weak interactions. But of course, that's not always guaranteed. And for this picture to be useful, it's implicit.
And we've never even thought about maybe checking it. That is, if you perturb the Hamiltonian a little bit, and of course, people have done this, you can add a four-spin term or a sigma^z sigma^z term to this Hamiltonian. It's no longer mapped to quasi-free fermions, but people have proved a variety of really interesting results, done mathematically or with more physical techniques, by thinking of adding interactions to those fermions. So that reasoning means that we really sort of like to know that these structures we recognize are robust. They're not very singular things that disappear immediately if you change the model a little bit. So now in two space dimensions, as we saw this morning, we have also in many models particle-like states. So especially if you have a gapped ground state, that is what you expect to see. That describes the excitations. But their statistics doesn't have to be fermionic or bosonic. In general, it can be something more general: anyons. And what we've been thinking about is what the mathematical structure is and how you describe them. And you can think about phase transitions. But again, when you stay in one of these phases, then the type of anyons that you find shouldn't depend on, and the F-symbols and their statistics shouldn't change under, small perturbations of the Hamiltonian. And so that's the theorem we wanted to prove. Already in the XY chain, you see that sometimes the relationship between the quasi-particles and the original variables may be non-local. So in the case of the Heisenberg ferromagnet, it's local. So these are the bosons. This is an operator that sort of expresses the hardcore constraint. If the number operator for the bosons reaches 2S, then this kills the operator. So that's the hardcore constraint. Otherwise, the spin-raising operator is just a boson creation operator. And it's local. It happens at one site. For the XY chain, it's a bit different.
The spin-raising operator is sort of proportional to a fermion creation operator. But clearly, all the spin operators commute, so to get proper fermions, I need to add this string. So this is the analog of sigma z, 1 minus twice the fermion number, a product over all the sites, say, to the left of x. And so this is a whole string. It's a non-local object. And this complicates things a little bit. It's not too bad. But it certainly shows you that if you want to obtain stability, we're really trying to prove stability of something that has some non-local characteristics. And this requires some dedicated technology. In the case of the anyons, and in the toric code model that I will explain in detail, it's a similar kind. These particle-like excitations, they are sitting somewhere, their energy is sitting somewhere. But to create them, you need to apply a non-local operator. I'll explain that. Most of you have seen it. So I hope it sort of refreshes your memory, and then everything else will become clear. And I'm going to just talk about the toric code model, which is the simplest example. So this belongs to the family of quantum double models that were introduced in 2003. And it's based on a finite group. It can be arbitrary. And the special case of the toric code model is the simplest non-trivial group, Z2. Almost everything I say is immediately generalizable to general abelian G. In fact, most things generalize to non-abelian G, but there are some things that we don't know how to do in the non-abelian case. But if I believe Michael Levin, he told us that all the interesting physics already occurs in the abelian case. So I like to say that too. I'm not sure whether that's entirely true. But in some sense, the issues occur there. And so if you can solve it. What is the definition of interesting? Is it something I like to do? No, I mean, it's related to physics. So here's the toric code model. Again, it's essentially the same notation.
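Before the toric code details, the Jordan-Wigner point just made can be checked numerically. After the string, σ⁺_j σ⁻_{j+1} becomes c†_j c_{j+1}, so the spin Hamiltonian Σ_j (σˣσˣ + σʸσʸ) has exactly the spectrum of free fermions with hopping amplitude 2, and the ground-state energy equals the sum of the negative single-particle energies. A toy check of my own, on an open chain of N = 6 spins:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2, dtype=complex)

def site_op(single, site, N):
    """Embed a one-site operator at position `site` in an N-spin chain."""
    return reduce(np.kron, [single if k == site else I2 for k in range(N)])

N = 6
H = sum(site_op(sx, j, N) @ site_op(sx, j + 1, N)
        + site_op(sy, j, N) @ site_op(sy, j + 1, N) for j in range(N - 1))
E0_spin = np.linalg.eigvalsh(H)[0]          # exact diagonalization, 2^N states

# Jordan-Wigner image: H = 2 * sum_j (c_j^dag c_{j+1} + h.c.), free fermions
T = np.zeros((N, N))
for j in range(N - 1):
    T[j, j + 1] = T[j + 1, j] = 2.0
eps = np.linalg.eigvalsh(T)
E0_fermi = eps[eps < 0].sum()               # fill all negative-energy modes
```

The two ground-state energies agree to machine precision, even though the fermions are non-local in the spins.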
There are four-body interactions on this lattice. Traditionally, the spins are associated with the edges. They're sitting at the edges. And there is a plaquette operator, which is the product of four sigma z's or sigma 3's, and a star operator that multiplies sigma 1's for each of the spins associated with an edge meeting in a vertex. And I put a 1 here again. So I think Michael had minus A_v and minus B_f. And maybe there were stars and plaquettes or whatever. I put a 1. So you again see that this is a frustration-free model with non-negative terms. We make it explicit. It can be solved because all these operators commute. They can be simultaneously diagonalized. You can understand what the eigenvalues are. And we understand the entire excitation spectrum. And that's where Kitaev sort of showed us, by example, a whole class of interesting anyon models. And there have been other interesting models, like the Levin-Wen models, that do a similar thing. But we definitely have to believe that they can occur in these systems with just simple finite-body, finite-range interactions. So let's immediately go to the infinite lattice. There are other aspects of the model that are sort of interesting to study in finite volume and on surfaces. But let's go to the infinite lattice. And look at this result here by Alicki, Fannes, and Horodecki. They prove that there is a unique state of the infinite system that is a frustration-free ground state, meaning that it gives zero expectation to all these non-negative operators that appear in the Hamiltonian. So it has zero energy. There is a unique zero-energy state on the infinite lattice. And state here is defined as the functional that gives you the expectations. And they prove it. They show that these relations are enough to uniquely determine the expectation value of any local operator. Using the algebra of these operators, and the fact that these expectations are zero, you can compute all of them.
And therefore, and there is an example that you can construct, so there is a unique ground state exactly like that. But how does this reconcile with the fact that on a torus, the ground state is degenerate? So yes, you're absolutely right. So this vacuum state is unique and is gapped and everything. But there is something non-trivial about it. And one way to see it is that if you look at the model on a torus or some more general closed surface of genus G, then you see that the ground state is degenerate. And these degeneracies are related to the elementary excitations that occur in this model, which you can create without creating energy with those particular boundary conditions. This model famously has a ground state degeneracy equal to 4 to the power G, where G is the genus of the surface. So that's one way you see that there's something sensitive to the topology of the surface in there. There is indeed topological order in this model. But all these states are locally the same. So if you take the thermodynamic limit, every expectation converges to exactly the same thing. And that is this one. So let's become a little bit more concrete about what the state is. And I'll show you. You can sort of describe this unique frustration-free vacuum state as a gas of loops on Z2. So you can sort of look for configurations that satisfy, say, this condition, for configurations in the basis where sigma 3 is diagonal. So for instance, if I have all pluses, then of course, the product of the four pluses is plus. And this will be 0. And all these terms will vanish. And then if we act with the star operators, with the sigma 1's that flip spins, they do that four at a time in the star configuration. And if you draw the minus spins by a red dual bond as I've done here, you will see you always get closed loops of red lines. Well, that's when you flip these four spins. But then if you flip more spins, if I flip the four here, then I will flip this one back and I get a closed rectangle.
And I always get closed loops. And they don't change the other condition because they commute. And so basically, the star operators, by their action, permute through all configurations that satisfy this rule that the number of minuses is even around each plaquette. And in fact, the equal-weight superposition of all such configurations is exactly this unique ground state. I know this was a little quick. But we will talk about the loops for a couple more slides. And then we will not talk about the loops anymore. But you have to sort of think of that in the background. OK. I would be happier if there would be superselection sectors. There will be superselection sectors. So here is a bigger picture, more loops, random, all equal weight. And you take the superposition. So a red line crosses the black line of a minus spin, but not a plus spin. That is the ground state of this model. But then it's not difficult to check that if you would have a configuration where you have sort of minuses in a string that ends in a plaquette, so that means that there's only one minus around the four edges of this plaquette, then there's an excitation there. There is one term in the energy equal to 2 instead of 0. And then if you apply the star operators, because they commute, they cannot change the parity of the number of minus spins around each plaquette. This condition will be satisfied no matter how many star operators you apply. And that means that if you sum over all possible red line configurations that are all closed loops except for one that ends in exactly this plaquette, this will also be an eigenstate of the Hamiltonian. It will have energy 2. And I can move it around. I can create many excited states like that. And of course, I could do multiple. And that's what I'm saying here. And so now we can sort of prepare to simplify the picture.
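On a small torus all of this can be verified by brute force. The sketch below is my own check (2×2 torus, 8 spins; the edge indexing is a convention I chose): it builds the star and plaquette operators, confirms they all commute, finds the 4 = 4^G zero-energy ground states, and checks that a single sigma 1 on an edge creates a pair of plaquette excitations with total energy 2 + 2 = 4.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

L = 2                                    # 2x2 torus: 4 vertices, 4 faces, 8 edges
def h_edge(i, j): return (i % L) * L + (j % L)           # edge (i,j)-(i+1,j)
def v_edge(i, j): return L * L + (i % L) * L + (j % L)   # edge (i,j)-(i,j+1)
n_edges = 2 * L * L

def prod_op(single, edges):
    """Tensor product acting as `single` on the given edges, identity elsewhere."""
    return reduce(np.kron, [single if k in edges else I2 for k in range(n_edges)])

# star operators A_v (sigma^x on the 4 edges at a vertex) and
# plaquette operators B_p (sigma^z on the 4 edges of a face)
stars = [prod_op(sx, {h_edge(i, j), h_edge(i - 1, j), v_edge(i, j), v_edge(i, j - 1)})
         for i in range(L) for j in range(L)]
plaqs = [prod_op(sz, {h_edge(i, j), h_edge(i, j + 1), v_edge(i, j), v_edge(i + 1, j)})
         for i in range(L) for j in range(L)]

Id = np.eye(2 ** n_edges)
H = sum(Id - A for A in stars) + sum(Id - B for B in plaqs)

w, V = np.linalg.eigh(H)
ground_deg = int(np.sum(w < 1e-8))       # zero-energy ground states

# a single sigma^x on one edge anticommutes with the two adjacent plaquette
# operators: a pair of magnetic excitations, energy 2 + 2 = 4
psi = prod_op(sx, {h_edge(0, 0)}) @ V[:, 0]
E_pair = np.vdot(psi, H @ psi).real
```

On the torus the constraints ∏A_v = ∏B_p = 1 leave 6 independent stabilizers on 8 qubits, hence the 4-dimensional ground space.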
The difference between this configuration and one where I don't have this half-infinite line is this half-infinite line, which is flipping spins along a string that starts here. It's a nearest-neighbor lattice walk in the dual lattice of the square lattice. It starts here and goes off to infinity. And if you think about it for a second, if you apply this to the vacuum state, which is just the equal-weight superposition of all closed loops, you will get exactly this eigenstate. So such an operator creates this eigenstate of energy 2 out of the vacuum of energy 0. Now, of course, if you want to be serious, then it's not clear that an infinite product of operators is well-defined. So you start with a finite product, and then you take some limit of some kind. So we'll come back to that in a second. All right, so I already said this. So that's why these are eigenstates. And in essence, these are all the excited states of the toric code model, in this way. There's one thing I didn't mention, and that is extremely important as well. That is that this model has a duality symmetry. If I replace the lattice by the dual lattice, the spins still sit at the edges, but now I draw all the edges perpendicular to the ones that I have in this lattice. I get another square lattice that also has spins sitting at the edges. And if I then look at the configuration of edges corresponding to a star, it's a plaquette: the ones corresponding to a star in the original lattice are now a plaquette, and a plaquette is now a star. And so if I interchange sigma 1 and sigma 3 with a unitary rotation, and I go from the lattice to the dual lattice, I see I have a symmetry of the model. So that means that everything I was telling you here, which made reference to the basis in which sigma 3 is diagonal, I can also do in the basis where sigma 1 is diagonal. And that gives me the other type of excitations.
So now I've done these excitations created out of the vacuum by such an operator that formally has an infinite string, a half-infinite string; it ends somewhere in a plaquette. So for these, I have almost the same notation as Michael this morning. I use epsilon for electric and mu for magnetic instead of E and M, but it's exactly the same thing. So what I could have done, I could have worked in the dual of this lattice and done the same thing with sigma 1 and created electric excitations. And I draw these strings here in blue and also let them go to minus infinity. And instead of ending in a plaquette, they end in a vertex. So these are all excited states of energy 2 of the model. And in fact, they are all of them, in a sense. And if you want, I mean, they're all mutually orthogonal. They are orthogonal if these are in two different positions. And the red ones are orthogonal to the blue ones. It's very easy to see, because there are different terms in the Hamiltonian that have eigenvalue 2 instead of 0, so there's an operator with a different eigenvalue. So this is a little ell^2(Z^2) plus ell^2(Z^2) worth of excitations of the model. And then, of course, I can ask about a combination of these two types. I can create them out of the vacuum by applying these string operators. But I have a string here that is the red one, disjoint from the blue one. Obviously, everything commutes. I can do both. And now I will have a state that has energy 2 for a plaquette term in the Hamiltonian here. And then there will be another energy 2 for a star term in the Hamiltonian there. So these are excited states of energy 4. And there will be a combination of an electric and a magnetic excitation. So that's that fourth type of anyon that we saw this morning. And we can call it epsilon mu. And how many of those are there? Well, you can choose the position of the plaquette. And I've chosen here a nearest-neighbor position for the endpoint of this blue string.
And there's four possibilities for that. So that would be ell^2(Z^2) times C^4 worth of excitations. Now, they can also separate. And I will come to that in a second. There are many more states that also have energy 4. So I think it's good we spend just another few moments on the toric code model. But now we want to modify it, because, as was observed, the dynamics of some of these toy models is really boring, because all these terms commute. These excitations sit there, and nothing happens to them. No dynamics. But to actually see that they are anyons, we would have to move them around and so on. So let's see how we can add terms to the Hamiltonian, modify the Hamiltonian, so that we have non-trivial dynamics for these excitations. And well, I'm going to sort of show you how this can be done. You don't have to remember the details. We try to do this preserving as many symmetries as possible. And also, we wanted to have a nice invariant subspace. And basically, we are using the M operators, the moving operators that Michael had. But we modified them, because I didn't like the fact that they, at the same time, also create pairs. I want the subspaces that I constructed, say with one electric and one magnetic excitation, to still be an invariant subspace of my Hamiltonian, which wouldn't be the case if I just created them with single Pauli matrices. And here is how to do it. You need to introduce a sign with respect to an orientation of the lattice. You can pick an arbitrary reference orientation. I do horizontal bonds pointing to the right, vertical bonds pointing up. And then there is a sign convention of a vertex with respect to an edge: it's plus 1 if it's the starting vertex, minus 1 if it's the ending vertex. And of the edge with respect to a plaquette: if it's oriented in the counterclockwise direction, it's minus 1; if it's in the clockwise direction, it is plus 1. I'm going to try to remember the formulas.
But what I'm doing is I'm adding moving operators to the Hamiltonian for each of the elementary anyon types that we have in the toric code model. So the epsilons, I think we started by looking maybe at these this morning, right? Or the magnetic excitations, I don't remember which one we looked at. But the Pauli matrices that flip an additional spin, they can extend a string in the dual lattice. Or the sigma 3's will do the same in the original lattice. But I'm multiplying by these vertex and plaquette operators so that this will only happen at the endpoint of a string that's there, and not somewhere create a pair of additional excitations. So that's the role of these operators. Now, it's not immediately obvious; you have to do a calculation to see that indeed these operators leave the vacuum invariant, and they don't do anything where there isn't an excitation of that type already sitting. Where there is one sitting, it makes a nearest-neighbor move, hopping all around. And it's the same thing here for these ribbon excitations, the ones that have a neighboring electric and magnetic excitation pair. So they have four-body and six-body terms, but OK. Are these finite sums or infinite sums? These are all finite. Well, so the first sum is the sum over all edges. So there's something happening at all edges. You can hop over an edge or along an edge. So it's all nearest-neighbor terms in that sense. But all the other sums are finite. This is a sum over the vertices belonging to the edge. This is a sum over the faces that contain the edge. Yes, they're finite. But the first one is infinite. Well, yeah, it's a Hamiltonian. OK, I mean, it's formally infinite, just like the original one. So it's translation invariant. OK. And you can do this with different coupling constants. If you want to keep the duality, you have to take lambda epsilon and lambda mu the same.
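The mechanism these hopping terms set up is ordinary tight-binding: a nearest-neighbor hopping term of strength lambda gives the excitation a cosine/sine band. A one-dimensional toy illustration of my own (a single excitation hopping on a ring; the value of lambda is made up, and this is not the 2D model of the talk):

```python
import numpy as np

N, lam = 12, 0.7                     # ring size and a hypothetical hopping strength
T = np.zeros((N, N))
for j in range(N):
    T[j, (j + 1) % N] = T[(j + 1) % N, j] = lam

# plane waves diagonalize T: eps(k) = 2*lam*cos(k), k = 2*pi*n/N
eps_exact = np.sort(2 * lam * np.cos(2 * np.pi * np.arange(N) / N))
eps_num = np.linalg.eigvalsh(T)      # ascending, matches the exact band
```

Whether the band comes out as a sine or cosine depends on signs and conventions; the point is just that hopping terms produce a dispersing band for the quasi-particle.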
OK, so then I chose these terms so that now these spaces of just a single electric, a single magnetic, or a ribbon of an electric-magnetic pair are invariant subspaces. And so I can calculate the dispersion relation of the Hamiltonian in those subspaces. And here they are. It's a simple sine dispersion relation for the epsilons and the mus. And it's a Dirac cone for the epsilon-mu excitations. It's a nice picture. Is this as a function of the quasi-momentum? Yes, of course. Yes, that's what I mean by dispersion relation. It's the quasi-momentum. It's a translation-invariant model on a lattice, and as a function of quasi-momentum, I can calculate the spectrum if you want. What happened to the little something in the middle here? Ask me later. I don't know what you're asking. OK. But I can also add all the terms together and look at this sector with one electric and one magnetic excitation. And then, since the electrics can hop separately from the magnetics, even if you start with a ribbon, it may fall apart. It turns out, actually, if you plot the spectrum, it's kind of interesting. You take this ribbon, say, one, and then as a function of the other coupling constant lambda, this is the spectrum for some generic total momentum, total quasi-momentum. We have two particles now. So each has a quasi-momentum; the total momentum is conserved, so I can fix it. And then I still have a relative momentum. And so when the two excitations unbind, you just have a scattering state. That is the black part. You can calculate it exactly. But you also have a bound state. So it's like the electric and the magnetic form an atom. They can have a total momentum, but they stay together. Then, in fact, they're degenerate, and they have the symmetry, and so on. And it's sort of interesting that it depends on the relative sign what happens. The bound states are always there if these coupling constants have the same sign. If they have opposite sign, then it's different.
And at a certain point, the bound state disappears. And you can see in the numerics how this bound state becomes bigger and bigger and dissolves — they fall apart. So you get a sort of view of fusion and unfusion in this model from the dynamics of the Hamiltonian itself. So this is the kind of model. I didn't design this model based on physical considerations for a particular material — someone else will have to do that. But these are sort of the simplest terms that have the properties I mentioned that you would want. So I'm interested in studying models like that. Maybe someone else comes up with another Hamiltonian, and I would like to be able to understand what it means for a particular structure of anyons to be present in the excitation spectrum of such a model. So first, I want to be a little more precise about what these dispersion relations mean. Since I like to work in the infinite system, it's not entirely obvious a priori what the Hilbert space is, so I'm going to construct the Hilbert space. But I already prepared you well: I'm going to start from the GNS space of the unique vacuum that the Toric Code model has, which is this equal-weight superposition of all the closed loops. So you can sort of visualize that, yes? There is one question, which is related to one of Brown's questions: how do you get the infinite algebra? It's the standard quasi-local C*-algebra. Is the Hamiltonian in there? No, the Hamiltonian is not in there; the finite-volume Hamiltonians are. But the dynamics is well defined on it, and since the dynamics is well defined and the state is invariant under it, a general theorem tells you that there will be a well-defined self-adjoint Hamiltonian, non-negative, with zero as an eigenvalue at the bottom, that implements the dynamics of the model in this representation. So that's what I'm doing. So that's the ground state.
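The binding and unbinding can be illustrated with a standard toy model in the relative coordinate: a free hopping band plus a contact potential, whose sign decides whether a bound state splits off the band. This is my stand-in, not the actual two-anyon calculation of the talk:

```python
import numpy as np

def spectrum(U, L=401):
    """Relative-coordinate toy model: nearest-neighbour hopping (band [-2, 2])
    plus a contact potential of strength -U at the origin, on an open chain."""
    H = -np.eye(L, k=1) - np.eye(L, k=-1)
    H[L // 2, L // 2] = -U
    return np.linalg.eigvalsh(H)  # ascending eigenvalues

# attractive case: a bound state splits off below the band,
# at E_b = -sqrt(4 + U^2) in the infinite-chain limit
e = spectrum(3.0)
assert e[0] < -2.0
assert abs(e[0] - (-np.sqrt(4 + 3.0**2))) < 1e-3

# flipping the sign of the coupling pushes the state above the band instead
e = spectrum(-3.0)
assert e[-1] > 2.0
```

As the potential strength is tuned toward zero, the bound state merges into the continuum of scattering states — the discrete analogue of the "dissolving" bound state in the plotted spectrum.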
But we were interested in those excited states — they had an anyon, or two anyons. Those we created using a path or a dual path: a path starting in some face of the lattice, in a plaquette, or maybe a path starting at a vertex. Let's fix just one to do the example. So what I wanted to do is act with the string operator. But now I have to work in the representation, acting on this vector Omega-0, which represents the vacuum state in my new representation. And so I wanted to start with a finite path, because that's an observable in the algebra — I represent it, and then maybe I take a limit to infinity. But you quickly realize that this is not really an interesting thing to do: it doesn't converge as a vector in Hilbert space — or it converges weakly, if you want; it converges to 0 — so it doesn't give you anything. But the matrix elements of the Hamiltonian converge very well. So first of all, this one you can define on the infinite lattice. These are finite strings, and I can very well take the limit of these strings to infinity, and I will be able to calculate the matrix elements of the Hamiltonian. And now, a couple more things that I learned from this. Why is this well defined? Well, it is well defined because, if I look at this formula — I act with some operator in my representation on the vacuum, I represent my Hamiltonian some way, I have another vector — I can actually look at this as a representation of a transformed Hamiltonian. These string operators are products of sigma-1's or sigma-3's; these are self-adjoint unitaries. And then I realize that if I have the same string operator on both sides, then on any local term of the Hamiltonian — one of these A_v's or B_f's — when the strings are outside of the support of the operator I'm considering, they cancel each other: I have sigma-1 squared or sigma-3 squared, and so that's the identity. So this limit exists readily on every local observable.
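The cancellation argument — conjugation by a growing string stops changing once the string has passed the support of a local observable — is easy to check numerically. A small sketch of my own; the chain of six qubits and the observable Z⊗Z on sites {2,3} are arbitrary choices:

```python
import numpy as np
from functools import reduce

n = 6
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def embed(pauli, sites, n):
    """Put a single-site Pauli on each listed site, identity elsewhere."""
    mats = [I2] * n
    for s in sites:
        mats[s] = pauli
    return reduce(np.kron, mats)

A = embed(Z, [2, 3], n)             # a local observable on sites {2,3}
conjugates = []
for m in range(1, n + 1):
    S = embed(X, range(m), n)       # growing sigma-1 string on sites 0..m-1
    conjugates.append(S @ A @ S)    # S is a self-adjoint unitary: S A S^dag = S A S

# once the string has passed the support of A, conjugation stops changing:
assert np.allclose(conjugates[3], conjugates[4])
assert np.allclose(conjugates[4], conjugates[5])
# mid-way, the string endpoint sits inside the support and flips the sign:
assert np.allclose(conjugates[2], -A)
```

The eventually-constant sequence of conjugates is exactly why the limiting conjugation is well defined on every local observable, hence defines an automorphism of the quasi-local algebra.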
And it defines an automorphism of the algebra of local observables. And if you want to have two of these strings — say magnetic ones that go to infinity but end at two different points — then one is just conjugation by a finitely supported unitary away from the other. So I can also express this with this automorphism. But now it's kind of clear: although I have calculated the matrix elements of a Hamiltonian, which would be a bounded operator acting on a subspace isomorphic to little l2 of Z^2, it's not really a subspace of the Hilbert space I had before, because I'm composing with that automorphism. So I'll make this a little more explicit. And here I'm just saying you do the same thing with the other types of anyons as well, and it all works exactly the same. Because I had this vacuum state that gives me the vacuum expectation value for every observable. But if I compose with this automorphism — that is, conjugation by a string operator — I actually get a representation of the excited state of energy 2 that has a magnetic excitation sitting in a particular plaquette. Because this converges nicely, this gives me the new state. And what is the GNS representation of that state? I could have multiple ways to represent it, but the simplest here is to realize that, since it's a composition with an automorphism, I can calculate the expectation values by taking the representation composed with the automorphism, which is just another representation of the same algebra — it's obviously a composition of morphisms. So this will be one mu floating around somewhere? Yes — and here it's located at f. So there's a whole family of these things. But there's only one sitting there? That's correct, yeah, there's only one. Zero density? Yes, there's only one. So this is a GNS triple of that excited state.
And it is in that space that you can embed that little l2 of Z^2 that has the magnetic excitation in all the different positions. Because if you change the position, that is an inner equivalence — a unitary operation that is local. If you want to move more than one step, you have to use more than one sigma-1, but it's a finite string; that is inside the algebra, and it doesn't change the representation. These are all equivalent representations. And similarly, we can do this with the electric excitations and with the combined epsilon-mu excitations. We are almost going to make you — you'll see — happy, which I very much want to do. So we have four classes of representations that are now done. This is the vacuum representation we started from; there's just one state in it, and sort of an equivalence class of representations, because I can conjugate by finite unitaries. I can also start from one that has an energy-2 excitation at a particular face F; in fact those are all equivalent, and that's my second equivalence class. And then I have an electric equivalence class, and I have the combination. And it's not difficult to show that if you take representations from different sets here, they are not equivalent — they are inequivalent; they are disjoint representations. These operators that go to infinity, they are not in your algebra, obviously. And they are fundamentally different depending on whether you go with sigma-3's over a path in the lattice or sigma-1's over a path in the dual lattice — they have nothing to do with each other. So now we come to the superselection sectors. This is well understood for the Toric Code model. But what I want to do, if I am interested in a model that has terms like this, and maybe some other terms added, is to extract this kind of structure for the excitations out of the model itself.
OK, we already saw the dispersion relations, and that you can have bound states and scattering states for the combined epsilon-mu excitations, depending on the values of the coupling constants. So I'm not going to explain this in detail, but the particle-like nature of these excitations is well understood, starting from the fact that they're created by these string operators with Pauli matrices. The epsilons and the mus each behave like hardcore bosons among themselves, but they have non-trivial mutual statistics with respect to each other, and the combination is actually a fermion — the operators one defines are Majorana fermions. And so I would now expect that in a model like this I find the same thing: that the excitation spectrum can be understood as a model where you have two types of hardcore bosons with some mutual statistics, and then another type, Majorana fermions, that are the fusion of the two. And what we looked at is how to recognize this. This is a problem that people have thought about in axiomatic field theory. Suppose you have at least the axioms, and maybe even examples satisfying your axioms in quantum field theory; people have asked the question: what is the particle content of my theory? And people have found that in two dimensions they need not be fermions or bosons. There is a general theorem: if you take the right framework, you can prove that in three-plus-one dimensions it will be fermions or bosons, but in two dimensions you can have braid statistics — you can have anyons. And people have done this in the relativistic context. So we are taking inspiration from that exercise — things that happened in the 70s and the 80s, continuing in the 90s — but adapted to our situation. And of course, we don't have Lorentz invariance. But I like this idea of using automorphisms to create the different types of particles.
Because automorphisms already have the structure that people see, or posit, in these modular tensor categories. You can multiply particles and get new particles, or combine particles — and of course, you can compose automorphisms as well. And so at first maybe you would think: I'm going to classify equivalence classes of representations of my system, maybe ones that satisfy certain physical conditions. But representations are kind of difficult to work with directly. If you obtain them by composing a reference representation with an automorphism — or an endomorphism, more generally — then you already have the product structure and so on, and things are more tractable. But just as you don't want to study all representations — many have infinite energy; they're not physical — similarly, you don't want to study all automorphisms. You need to apply some criteria to select those that are likely to correspond to particles. That is what people call superselection criteria. And I'm going to mention just the two most important properties that we want. One is almost locality in cones. So I'm thinking of an automorphism tau here — and I'm thinking, more generally, of conjugation with these unitaries that in the Toric Code model correspond to the string operators. They are not local, so I shouldn't start by assuming something is local. There is something that goes on non-trivially out at infinity, but it's not all over the place: I can localize it on a string in the Toric Code model, and generally I'm going to localize it on a cone. A cone is just a wedge in, say, the two-dimensional lattice. There is a direction in which I go to infinity, and I allow my automorphism to do something non-trivial in a wedge like that, but in the complement it should essentially be the identity. And it's not a good idea to require this strictly — that's what "almost locality in cones" says.
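The condition being described can be written schematically as follows (my reconstruction of the slide; the constant and the power are generic placeholders, not the paper's exact statement):

```latex
% Almost locality in cones: \tau acts (almost) trivially far from the cone.
% For a cone \Lambda, every power k, and every observable A supported
% at distance at least d from \Lambda:
\[
  \| \tau(A) - A \| \;\le\; C_k \,(1+d)^{-k}\, \|A\| .
\]
```

The faster-than-any-polynomial decay in the distance d is exactly the "almost" in almost locality: strict locality in the complement would be too rigid to survive perturbations.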
It says the following. These are cones, and if I take an operator supported in the complement of a cone, and also shifted away from it — so it's far away from this cone — then the action of the automorphism is nearly the identity, and the difference decays at least as fast as any power in the distance. That's what we need technically to do what we want to do. Clearly, this is satisfied for the automorphisms in the Toric Code model, but it will also be satisfied for the automorphisms in those more general models. So that's one thing. The other thing is transportability. I think every physicist will agree: if you recognize something as a particle-like excitation in a translation invariant model, then it means that I can move it to another spot — or, when I see it somewhere else, I'll recognize it as the same thing. So this is what is meant by transportability with respect to the vacuum state. You have two cones, and one of these automorphisms that I'm considering is localized in one; then there should be one localized in the other cone that is unitarily equivalent to it. The magnetic excitation can be here, with a string that goes to infinity in this direction, but it can also be here, with a string that goes in that direction. And I didn't emphasize this — maybe I should have: when I chose my reference strings to go to minus infinity in the vertical direction, downward, this was an arbitrary choice. You can show that the state you generate does not depend on how the string goes to infinity, only on the endpoint. All expectation values, as you can calculate, are exactly the same — independent of the path. But I don't want to consider things that mess around all over the place; they need to at least stay in the cone. OK. So these are adaptations of what people did in the 70s in algebraic quantum field theory. And Pieter Naaijkens studied the Toric Code model in those terms.
And so when he was a postdoc in Davis, we started thinking about how this would work more generally. All right. So I'm not going to — yes, this picture is just about these cones, but I don't think I will need it. OK. So now, how do the superselection sectors come in, and how do they make contact with the degeneracy structure of the ground state space in finite volume, if you put the model on a torus or something like that? They are equivalence classes of automorphisms — really, equivalence classes of representations. Pieter Naaijkens calculated these equivalence classes for the Toric Code model, and for abelian quantum double models in general, and he found something exactly parallel to what we did with the states we constructed a little while ago. And as I said, there is a product structure on them, because automorphisms can be composed. In fact, with a bit of work and some definitions, this gives you all the properties you need for a braided, modular C*-tensor category — namely, in the particular case of the Toric Code model, the one based on the representation theory of the quantum double of Z2, with sectors labeled by Z2 cross Z2. But you can do this more generally. So this is the structure that we can show is stable under perturbations that are not too strong. Now I need to explain a little what kind of perturbations I will consider. Can we get the epsilon sector — what happens if you apply the automorphism several times? OK. So this is sort of a gauge theory with charges that are elements of Z2, and this Z2 is reflected in the structure of the equivalence classes. So you will only have these four equivalence classes, and if you apply the epsilon automorphism n times, only the parity of n will actually matter. Of course, there can be excitations of energy 6, which are not the same as excitations of energy 2, but they can be unitarily transformed into each other.
Because when you create a pair of anyons at finite distance, that's an inner operation, and that creates a unitarily equivalent representation. Yeah. Good question. OK. So when I say stability of this structure, I want to look at quite general perturbations — short-range terms that I add to the Hamiltonian. There are some limits to what you can do. If it's finite-range, then it just has to be small enough: there is going to be a parameter s in front, which is allowed to be non-zero but not too large. If you have interactions of arbitrary range, they should decay exponentially. In this setup you have different choices you can make, but there are some short-range conditions you have to impose. So that's the condition on the perturbation you add to the Hamiltonian. For instance, if I add the terms that I introduced for the dynamical Toric Code model and choose my lambdas not too large, this will be satisfied. Now, if I wanted to formulate this more generally, I would also have to assume some properties of the unperturbed model; but here I just wrote the Toric Code model, because it has all these properties. And the truth is, we can really only do this for models that are frustration-free, for which we know there is a gap, and for which the ground state satisfies a property called local topological quantum order — which leads to the spectral stability of the gap by a result of, well, first Bravyi, Hastings, and Michalakis, and then Michalakis and Zwolak. But since the Toric Code model has all these properties, I'm not going to talk about that. So here is the theorem. It's summarized a little so it fits on the slide, but you can read the paper — about a month ago we submitted a revised version; it should appear in CMP, hopefully soon. So there is an s0: if the coupling constant is less than this value, then I can in fact construct quasi-local automorphisms that have the following properties.
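A schematic version of the class of perturbations being described — my paraphrase; the decay condition actually used in the paper may be stated in a different but equivalent form:

```latex
% Perturbed Hamiltonian: short-range terms with a small overall coupling.
\[
  H_s \;=\; H_{\mathrm{TC}} \;+\; s \sum_{X \Subset \mathbb{Z}^2} \Phi(X),
  \qquad
  \|\Phi(X)\| \;\le\; C\, e^{-a\, \operatorname{diam}(X)},
  \qquad
  |s| < s_0 ,
\]
% where the sum runs over finite subsets X of the lattice; finite-range
% interactions are the special case where \Phi(X) = 0 for large X.
```

The stability statements that follow hold uniformly for all couplings below the threshold s0.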
They will implement what Mark was showing on the board: if you have — and maybe his parameter was even called s, also — if you change your Hamiltonian along an interpolating curve, then you can use that as your definition of what "the same phase" means. And these quasi-local automorphisms actually transform everything in your theory along paths like that. So let me be a little more specific. They are quasi-local, so you can think of them as conjugation by a unitary that is the dynamics, at some finite time — time one — associated with a short-range interaction. It actually cannot be finite-range, but it is short-range, fast-decaying. We did the mathematics of that in the paper with Bachmann, Michalakis, and Bob Sims, some years ago. So what it will do for you: number one, it will transform that vacuum state, the reference state. If you compose with that automorphism, you will get a translation invariant, infinite-volume ground state of the perturbed model, and it will have a spectral gap above the ground state — here I refer to that stability theorem; the first result in that direction was by Bravyi, Hastings, and Michalakis. And then, about the superselection sectors — the superselection sectors that we understood, that Pieter Naaijkens understood for the Toric Code model: they are equivalence classes of representations. Within each set — K1, K2, K3, K4, whatever; no, we started at zero — they are all unitarily equivalent. Now, when I compose these representations with that quasi-local automorphism, I create new classes of equivalent representations, and the ones that were inequivalent will still be inequivalent. And that's where the quasi-locality of the automorphism is really essential.
And then it's not difficult to see that the automorphisms corresponding to the particles at perturbation strength s are obtained from the ones at s equals 0 — the unperturbed model — just by conjugation with this quasi-local automorphism. That's also why we need quasi-locality in cones: I'm going to have this string operator essentially acted upon by the quasi-local automorphism, and it will no longer be a local operator, not even in the string sense; but I will still be able to show that it's essentially localized in a cone. And then you can actually go and check that you can define all the elements of the braided tensor category that you wanted. And of course there will be some — in some sense, the fact that you find the same anyons is not the miracle, so to speak, because the structure that labels them algebraically is rigid. There's a theorem called Ocneanu rigidity, or something like that, so if the structure is well defined, it has to be stable. The point is how to define it. And that's also what we heard Michael Levin talk about: it's not fully satisfying to have an algebraic theory disconnected from any Hamiltonian model or condensed matter system and to say, well, this gives you the structure that you will find when you understand your excitation spectrum. You want to be able to see how it emerges from the theory. And that's what you can do, at least in this perturbative sense. Yes? Are your perturbations translation invariant? Not necessarily, no. But then why are your ground states translation invariant? I didn't say that. Or was there something translation invariant? You can translate — here, I know what I'm saying. Is omega-0 composed with the automorphism translation invariant? No, it won't be. In point 2 there's "translation invariant". Oh no — that's only if the perturbation is. Yeah. Maybe.
So maybe in a previous iteration I stated this theorem for the perturbations of the dynamical Toric Code model, and then it's translation invariant. So if the perturbation is translation invariant, the ground state will be translation invariant; but if it's not, it will not be — it doesn't have to be. No, sorry — yes, I agree. In general it's not translation invariant. Absolutely, that's correct. OK. So it's maybe a little impressionistic, but there is a theorem with all the conditions if you go look in the paper. OK, so I hope I explained what I wanted to explain. Whether I really made you happy, I don't know, but I will check later. In all this, exploiting the quasi-locality of the dynamics is really absolutely essential. One thing that comes in here is Lieb-Robinson bounds for things localized in cones — and I'm not saying we have the optimal formulation of that. For instance, we couldn't work with regions thinner than cones: they have to become wider as they go to infinity, with the estimates that we have. That's one reason why we define it that way — so that it works. Whether that's really necessary for the string operators, for instance, I don't know. Frustration-freeness is everywhere — I didn't really emphasize it very much, but you only need it for the unperturbed model, the reference states. So we understand almost everything about models that can be connected to a model with a frustration-free gapped ground state. This is still a weakness, I think, because it's not really a physical condition. If you weren't convinced before, I think now you should be convinced that anyons really exist — they are a robust thing. Of course, we still want people to find them in all kinds of materials and physical situations. I don't know whether it's interesting, maybe: the fact that, depending on the total momentum, the epsilon and mu anyons may bind or not — I think this means there is a phase transition in this model.
But that's something of separate interest — I don't know whether there's physical interest. I haven't talked about bulk-edge correspondence, but there is a natural thing to ask: I've done everything in the bulk, and you could certainly try to see what happens to this structure if you have a half-space or something like that. And, all that said, while many of these things can be done for non-abelian anyons, there are some things that we haven't figured out how to do — and I don't know whether anybody knows how to do them. When you have non-abelian anyons, with non-trivial fusion rules and not just one-dimensional representations, then you probably cannot just work with automorphisms — you cannot just do unitary transformations — and this complicates things. I don't know exactly what the answer is to that. Maybe it's just technical. You really have to work with positive maps, and it's not entirely clear how the product structure comes out. Maybe it's technical, or maybe there's something to be learned there that so far I don't understand. I don't know. But this is exactly the right audience to ask those questions. All right. Thank you. Thank you. I like these one-particle approximations, and it seems that in your picture they come out of the many-particle theory. So can one say what the many-particle theory should have so that, in the end, you can get a reasonable one-particle approximation?
Kitaev's quantum double models provide a rich class of examples of two-dimensional lattice systems with topological order in the ground states and a spectrum described by anyonic elementary excitations. The infinite volume ground states of the abelian quantum double models come in a number of equivalence classes called superselection sectors. We prove that the superselection structure remains unchanged under uniformly small perturbations of the Hamiltonians. We introduce a Dynamical Toric Code Model and discuss some of its features.
10.5446/56097 (DOI)
I will be talking about two closely related applications of the index theorem you've just heard about from Sven. Namely, as already anticipated, one application is to prove quantization of the Hall conductance in interacting systems, and the other is to prove Avron-Dana-Zak relations for such quantum Hall systems, which are relations between the Hall conductance, the filling factor of the state, and the strength of the magnetic field that pierces the quantum Hall sample. So to do that, I will very quickly recap the many-body index that you've just heard about — I can go through this fairly quickly, because it was only half an hour ago. So what are we looking at? Since I'm doing quantum Hall, I will just take a two-dimensional discrete torus of linear size L. On this torus there lives an interacting quantum system: perhaps a many-body spin system, where the Hilbert space is a tensor product of local spin degrees of freedom, or it could also be a fermionic Fock space built on the single-particle states on this torus. What we need, of course, in any case, is a charge to transport. So on each site there is a charge operator Q_x with integer spectrum, simply telling you how many charges are sitting on that site. Very often we will be interested in the total charge in some region X, and so this is just notation for that. Then — the index theorem is valid for a class of states, but I will restrict myself to the case where these states are gapped ground states of local charge-conserving Hamiltonians. So we have a local charge-conserving Hamiltonian H with a gapped ground state, which I denote by the projector P onto the ground state space; this ground state space might possibly be q-fold degenerate, but then it must have local topological order.
On top of that, of course, the index measures the quantity of charge transported by some process, and this process we call U. It is modeled by a unitary, and we demand that it leaves the ground state space invariant. Moreover, this U must preserve locality, in the sense that if you transform a local operator by this unitary, it is still more or less local, around the place where it originally lived — yeah, OK, I will not go into this. Then U must conserve charge locally. In particular, take the charge in half of the torus — say the top half, rather than the side half that Sven talked about; the reason will become clear soon: the process that pumps charge for me will be easier to draw pumping charge in that direction. So there is the charge in the half torus, and when you transform it with this process U, everything that happens is that some charge spills over the boundaries of this set — there is a boundary on the outside and a boundary on the inside. The charge spills over the boundary a bit, and when you take the difference, you get the spilled charge over one boundary and the spilled charge over the other, and we are interested in only one of these, because the expectation value of the sum, as Sven explained, is zero. We will be interested in how much charge has been spilled over this one boundary. OK. So we can now look at how much charge has been spilled in the ground state, and the index theorem says that this amount of charge is quantized — in fact, it is equal to an integer divided by q, the ground state degeneracy of our state. So let's apply this to quantization of the Hall conductance. How are we going to do that? We will take a particular process U, and we will build this process U by modeling a Laughlin pump on this torus. So what is a Laughlin pump?
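Schematically, the statement being used can be written as follows (my paraphrase; the symbols Delta-Q and omega are my shorthand for the quantities described in the text, not notation from the slides):

```latex
% \omega: the gapped ground state;  q: its degeneracy;
% \Delta Q_\partial: the part of U^* Q_{\mathrm{half}} U - Q_{\mathrm{half}}
% localized near ONE of the two boundaries of the half torus.
\[
  \omega\big( \Delta Q_{\partial} \big) \;\in\; \tfrac{1}{q}\,\mathbb{Z}.
\]
```

The expectation of the total difference over both boundaries vanishes, which is why only the charge spilled over a single boundary carries the topological information.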
We will model this Laughlin pump by threading a flux through one hole of the torus, and I draw this flux tube very close to the surface of the torus, going around, to indicate how I will insert this flux. Morally, if there is a flux through that hole of the torus, it means that if a charged particle moves around the torus once, it has to pick up a phase, and this phase is equal to the size of the flux. I will make the particle pick up that phase only when it jumps right under this flux tube — that is what you see in this setup: the phase is picked up when the particle moves across this one strip. A bit more concretely, you can take, say, a free quadratic fermion system that hops on the lattice, maybe nearest neighbors, put the Fermi level in the gap, and then maybe add some on-site interaction, just to emphasize that this all works for interacting systems. So how do you modify this Hamiltonian to model the flux threading? You just add the appropriate phase to the hopping terms that cross under the flux tube. I'd like to make an observation: remember the first talk, by Yasushiro, where he had this local twisting of the bonds — I think there is quite a connection. Ah, possibly. OK, so this defines a family of Hamiltonians, parameterized by how much flux has been threaded, and you can easily see that if you thread a unit of flux — which in the units I've chosen is just 2 pi — then you get back to the original Hamiltonian: indeed, the phase multiplication is then just one. So we have a family of Hamiltonians, and now we will adiabatically pump this flux, and so we are in the setting of the Thouless pumps that Sven already talked about, because after one cycle we are back at the start. And so, right, now how do we get the Hall current out of this?
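The 2-pi periodicity of flux threading can be seen in the simplest possible example: a tight-binding ring with the threading phase placed entirely on a single bond, mimicking the "one strip" construction (a sketch under my own conventions, not the speaker's model):

```python
import numpy as np

def ring_spectrum(phi, L=10, t=1.0):
    """Tight-binding ring threaded by flux phi; the phase is put entirely
    on one bond, like the hopping terms crossing under the flux tube."""
    H = np.zeros((L, L), dtype=complex)
    for j in range(L - 1):
        H[j, j + 1] = -t
    H[L - 1, 0] = -t * np.exp(1j * phi)  # the single "strip" bond carries the phase
    H = H + H.conj().T
    return np.linalg.eigvalsh(H)

# threading a full unit of flux (2*pi) returns the Hamiltonian to itself:
assert np.allclose(ring_spectrum(0.0), ring_spectrum(2 * np.pi))
# mid-threading, the spectrum genuinely moves:
assert not np.allclose(ring_spectrum(0.0), ring_spectrum(np.pi))
```

On the torus the same phase assignment is applied to every hopping term crossing the strip, and the 2-pi periodicity of the family is what makes the adiabatic process a closed pump cycle.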
Well, physically, if you have a flux that changes in time, you have an electric field that whirls around it; morally, in the model, this electric field also lives near the strip, and there will be a quantum Hall response to it — a current going along the strip, like this. And remember what we are measuring in the index theorem: the charge that flows into this upper half torus, which is precisely what happens here. Charge will be coming in here, and it will be going out on the other side, but we are only interested in the charge coming in across this outer boundary. OK, so what is this total charge? We will be changing the flux over some very long period of time, from 0 to 2 pi, so we integrate the Hall current over this time. The Hall current is, by a short calculation, equal to the Hall conductance times the rate of change of the flux; and so, integrating, the total charge moved through this boundary is just 2 pi times the Hall conductance. But the index theorem says that the charge moved across this boundary is quantized, and so in that way we do get quantization of the Hall conductance. Right, so that was a somewhat fake version of the argument. I talked about the Laughlin pump, which is supposed to be an adiabatic pump, so we need, of course, the gap to stay open throughout the Laughlin pump. A priori we need this: otherwise the parallel transport — the unitary that takes you all the way around the Laughlin pump — would not be local; and since we want to plug this unitary into the index theorem, it is a requirement on this unitary that it be local, and we can control its locality only if the gap stays open. But assuming the gap does stay open, what do we have? Then we can indeed generate the parallel transport by the Hastings generator — let me quickly remind you why the Hastings generator is good.
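The one-line computation behind the last statement, in the units of the talk:

```latex
% Integrate the Hall response over one pump cycle, \phi: 0 -> 2\pi:
\[
  Q \;=\; \int_0^{T} I_H(t)\,dt
    \;=\; \sigma_H \int_0^{T} \dot\phi(t)\,dt
    \;=\; 2\pi\,\sigma_H ,
\]
% so quantization of Q in (1/q)\mathbb{Z} forces
% \sigma_H \in \frac{1}{2\pi}\cdot\frac{1}{q}\,\mathbb{Z}.
```

Note that the result is independent of how slowly the flux is threaded, as long as the process remains adiabatic.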
It is because there is, again, this function w, which decays very rapidly, and then there is time evolution, with the local Hamiltonian, of the derivative of the Hamiltonian with respect to the flux. But this derivative lives only on the strip where the flux was threaded, so this entire thing more or less still lives around this one strip, and it is all nice and local. These generators then generate this family of flux-threading unitaries, and the unitary to which we want to apply the index theorem is the threading of a full unit of flux, namely F at the value 2 pi. So I've said all this, right? All this works if we assume that the gap stays open. Of course, one of the major triumphs of the proof of Hastings and Michalakis is that they circumvented the need for the gap to stay open under threading of flux; that is the reason why the paper is so incredibly complicated. And we can also do a trick to get rid of this assumption, so let me quickly explain what that trick is. Instead of only threading flux here, we will also thread flux very far away, over there, in the opposite direction. Why is this good? This is good because threading flux like this is a unitary process. Let me denote the flux-threaded Hamiltonian that has both fluxes by H tilde of phi. This is obtained from the original Hamiltonian by conjugating with these unitaries, so this entire family of Hamiltonians is related by unitaries. Therefore, of course, the spectral gap remains open, and we can use the machinery of Hastings to write the flow for the gapped ground state of this family. In particular, this flow will always be local, and it will be localized around the two strips where the two fluxes are threaded. In fact, this flow can be found explicitly in this case; it doesn't matter very much, but this is the solution. And interestingly, the solution for any phi involves only the adiabatic generator at zero flux.
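Schematically, the Hastings (quasi-adiabatic) generator and the flux-threading flow it generates can be written as follows; w is the rapidly decaying weight function mentioned above, and its exact form is not needed here:

```latex
K(\varphi) \;=\; \int_{-\infty}^{\infty} w(t)\;
   e^{\,i H(\varphi) t}\,\partial_\varphi H(\varphi)\,e^{-\,i H(\varphi) t}\,\mathrm{d}t ,
\qquad
i\,\partial_\varphi F(\varphi) \;=\; K(\varphi)\,F(\varphi), \quad F(0)=\mathbb{1}.
```

Since the derivative of H with respect to the flux is supported on the strip under the flux tube and w decays fast, K is quasi-local around that strip, and the unitary fed into the index theorem is F evaluated at 2 pi.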
This already very nicely anticipates that the end result will only depend, basically, on the Hamiltonian at zero flux, which is, of course, the Hamiltonian we are physically interested in. But now what do we do? This is the Hastings generator for this process. It splits apart into two terms, namely the one corresponding to threading flux here and the one corresponding to threading flux there. And now we propose, as our unitary to apply the index theorem to, to throw away K plus and keep only K minus. Said in another way, the full adiabatic flow for threading flux both here and there splits into a product, and we just take one of the factors as our unitary to apply the index theorem to. And indeed, since locally this flux threading is morally exactly the same as literally only threading flux here and not doing anything there, it is reasonable to expect that this does indeed give the Hall conductance, and indeed it does. So that way we can also circumvent assuming that the gap stays open under flux threading. Yes, that's what allows this process to be unitary. And the point is that the two strips are very far apart, so you have good control of locality using the Hastings generator, and that makes everything work. So do you think you can make this work without a gap? I mean, you said you can circumvent the gap. No, you still have the gap assumption at zero flux. Right, at zero flux there must still be a gap, certainly. It's my religion, but it's okay. You don't like the words "at zero flux"? There is no natural origin. Right, no, no, I agree. So let me rephrase that: at zero flux relative to the Hamiltonian that I'm physically interested in. But there is no physical meaning, because the flux depends on the choice of a loop. Depends on the loop.
Yeah, but once I have chosen a loop, I can calculate the flux for the Hamiltonian that I'm physically interested in, and then, after threading, I again calculate the flux across the loop; the difference from the original is that flux. So when I say zero flux, I mean that this difference is zero. There is a value of the flux. Yeah, and so there is a value where there's a gap, and that is the Hamiltonian that I'm physically interested in. So you only need the assumption that the gap is open at that one value of the flux. Yes. Under what conditions can a gap close? Under what conditions can a gap close? But since there's no natural origin of the flux... Well, I mean that. And if there's a gap at one value of the flux, is there a gap at another value of the flux? Well. You have a guarantee given by von Neumann and Wigner: if you break time reversal, changing two parameters does not close the gap; you need to change three parameters. So you are not really circumventing anything; the basic assumption is still that there is a gap. Well, right. I mean, of course everybody believes that the gap remains open under flux threading, I think, but there is no proof. That's why, yeah, okay, that's why I want to say this. Well, if you can turn it into a proof, that's another way to do it. Right. So the reason why this trickery is necessary, for a mathematical audience I guess, is that there is no proof that the gap indeed does stay open. Everybody believes that it stays open, but unfortunately there is no proof. If there's no special value of the flux, then... No, but there is: the special value is provided by the physical system you're looking at, right? But the physical system doesn't have this structure anyway. That I don't see. Oh, you mean it doesn't live on the torus, et cetera, et cetera. Right, that's a short bit. It's done in Fourier space. Sorry? It's done in Fourier space.
In Fourier space. Well, then there are two sides to the torus. I'm sorry, I'm not following that. Okay, I will continue, I think. So the next application is the Avron-Dana-Zak relation, which is a relation between, well, let me go to the next slide for that relation. So we are now considering again a system living on the torus. Now I want the dimensions in the two directions to be different, so there is an L1 and an L2, the lengths of the torus in the two inequivalent directions. The Hall state here exists by virtue of a magnetic field piercing this torus, and we take it of a strength such that it has a flux 2 pi N over M per unit cell. And then the following is true: the filling factor, minus basically the flux per unit cell times the quantum Hall conductance, is quantized. And this Q again is the ground state degeneracy. Right, so this was first shown for free fermions by Avron, Dana, and Zak. Just to think about how we will do this: we have just seen that we can get the Hall conductance from this flux-threading business, and Sven has told you that you can get the filling factor from translations in a translation-invariant model. So we will combine the two and see that this relation indeed follows. Right, there is one condition that I had not mentioned yet; well, let me not go too deep into that. So we are considering an L1 times L2 discrete torus, and for simplicity, consider this sort of model on it. Again, there are hopping terms, there is a Fermi level in the gap, and there is some interaction. And these phases model the magnetic field piercing the torus.
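The relation being discussed can be displayed as follows; this is a schematic statement in the conventions of this talk, where the flux per unit cell is 2 pi N over M, nu is the filling factor, sigma_H the Hall conductance, and Q the ground-state degeneracy (factors of 2 pi depend on normalization conventions):

```latex
\nu \;-\; \frac{2\pi N}{M}\,\sigma_H \;\in\; \frac{1}{Q}\,\mathbb{Z}.
```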
And we demand that these hopping terms are translation invariant; that is what allows us to do something with translation invariance, although the model itself, of course, is not translation invariant, due to these phases, which model the magnetic field. So we choose the vector potential modeling this magnetic field piercing the torus in Landau gauge, and let me just draw that pictorially, like this. This is a part of the torus, and this line is the coordinate line x2 equals 0. Then the vector potential modeling the magnetic field piercing the torus is just, yeah, for each arrow there is a value phi of vector potential. So indeed, when you walk around a unit cell, you pick up basically the value of one arrow, which is one unit cell's worth of flux, and that is what we wanted to model. Now, I was telling you that we want to somehow use translation invariance in the index theorem. But of course, this vector potential spoils translation invariance in the 2-direction of this model: if you apply the translation in the 2-direction, everything shifts, it's not the same. But we can compensate for this by threading a lot of flux through this torus. How does that work? You now thread a lot of flux, not the way I did it before for the quantum Hall conductance, but uniformly everywhere. How will I thread this flux? I will model it by just adding arrows on each bond. And how much flux have I threaded then? I have threaded phi times the number of bonds going around the torus in this direction, so phi times L1 units of flux, to get back to the original picture. Of course, everybody knows this; this is just the fact that the model is invariant under magnetic translations in the 2-direction.
Again, if we assume that during this flux threading the gap remains open, then we can let the machinery roll again and conclude that the ground state is invariant under translation followed by the parallel transport of this flux threading, for the appropriate amount of flux. Why do I write it like this now? Because after translation we have obtained a Hamiltonian, and therefore a state, that relative to the start has a flux minus phi L1, and this flux-threading process gets us back to zero flux relative to the initial Hamiltonian. So this unitary operator leaves the ground state invariant, and we can use it in the index theorem and then see what we get. To get something out of that, we use a good property of the index, indeed a property that, if it were not true, you would rightfully complain about: the index is additive. So if this is the index, and I'm dropping... Which index there? The one that Sven talked about and that I reminded you of in the beginning of this talk, the many-body index that we are talking about here; I've dropped the ground state projection from the notation. Additivity of the index means that if you have two unitary processes that each satisfy all the assumptions of the index theorem, then the index of the product is the sum of the individual indices. So let's apply that now to this process we just cooked up, this magnetic translation. And in fact, let's do it n times. Maybe I should tell you already why we do it n times: it's because in the end we will get the translation in the 2-direction applied n times, and that does leave this Hamiltonian invariant. To see that: if you do it n times, then, for example, here there were originally no arrows, but afterwards there will be n arrows. And n arrows means n times the flux per arrow... oh, sorry, it should be m over n; think of the value written here as m over n, that is better. So then multiply this by n.
So this is a multiple of 2 pi, and therefore the phases are invariant under this n-fold translation in the 2-direction. That is why we consider the nth power of this process. Now we try to disentangle this; on the next slide I will show you how that works, because it's sort of fun, but this is true: you can disentangle this product into threading a lot of flux, and indeed translating in the 2-direction by n steps. Then, applying additivity of the index, you get the sum of the index for flux threading plus the index for translations. And in particular, this n-fold translation does leave the ground state invariant. What you get is: here you have threaded L1 times m units of flux, so you get L1 times m times the Hall conductance, and the other term gives you n times the filling factor. Now, because we took the power n here, this is n times some integer divided by the ground state degeneracy. Since this must be true for arbitrarily large L1, we can in fact drop this L1 here, and that does give the desired result, after dividing by n. So we're done. The only thing I still have to convince you of is that this thing is in fact equal to that thing, and that's a huge calculation, so why not? I think you understand how the filling factor came in. Sven talked about the application to Lieb-Schultz-Mattis, and there the clue was to look at something related to translations. Here you look at translations in the 2-direction, and morally, the index is then the charge in a slab of this torus; but because you have translation invariance basically in all directions, you can divide this by the volume of the slab and you do get the filling factor. Okay. To get from here to there, just note that under translations, as we have seen a few slides back, translating in the 2-direction is basically the same as adding phi times L1 of flux to this Hamiltonian.
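The additivity step just described can be summarized schematically; this is bookkeeping only, suppressing normalizations, with U_flux the operator threading L1 times m units of flux and T2 the translation in the 2-direction:

```latex
\operatorname{Ind}\!\bigl(U_{\text{flux}}\, T_2^{\,n}\bigr)
 \;=\; \operatorname{Ind}\!\bigl(U_{\text{flux}}\bigr)
 \;+\; \operatorname{Ind}\!\bigl(T_2^{\,n}\bigr),
```

where the first term contributes the flux-threading piece proportional to L1 times m times the Hall conductance, the second contributes n times the filling factor, and the left-hand side is n times an integer over the ground-state degeneracy; dividing out L1 and n yields the relation above.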
Now, from this we can see what translating does to the Hastings generator, and in fact it does exactly the same; I have forgotten the K here, but K transforms in exactly the same way as the Hamiltonian. Then you can see what differential equation is satisfied by the translation of the flux-threading unitaries. Remember, such a unitary always relates one value of flux to another, so there are two parameters there. Taking the derivative: F was by definition generated by the Hastings generator, so this is what you get; inserting an identity here, you sandwich the Hastings generator by translations, you use this, and you get this differential equation. But of course, this flux-threading unitary is uniquely determined by its initial condition and such a differential equation. So it follows that, in fact, the translation of a flux-threading operator is the same as threading flux between values of flux shifted by phi times L1. So what do we get from this? Of course, we will move this translation here and start commuting things through. And indeed, we now write the operator that threads a lot of flux as a product of operators that each thread phi times L1 of flux, and then, putting T2 to the n here, we commute them through one by one, over fewer and fewer of these factors. And in the end, we do get the desired result. So that's how it goes. And I thank you for your attention.
As an application of the many-body index (see Sven Bachmann's talk), the Avron-Dana-Zak relation is shown to hold in the context of interacting quantum lattice systems, for the integer and fractional quantum Hall effects. A key formal property used to obtain this result is the additivity of the many-body index.
10.5446/56101 (DOI)
the hotel room last night. I don't know whether you guys had a similar experience; they had an outburst of facts last night. Okay, so today I want to talk about a topic which is really related to quantum phase transitions. I realized most of the previous talks focused on quantum phases, such as topological insulators, and some of them on topological orders. Today the topic is slightly different: I want to talk about a certain universal property of phase transitions. Is this working, actually? I want to talk about a certain universality of phase transitions between topological phases. So let me first give you some motivation: why am I interested in this problem, and what precisely is the problem that I want to study here? By the way, I'm a condensed matter physicist, so the language I use may be slightly different; if you have any question, feel free to ask me. I'll also try to explain some mathematical concepts, try my best. One of the cornerstones of quantum condensed matter physics is Landau's theory. In Landau's theory, in the time of Landau, he believed that phases of matter are described by their symmetries. Therefore, if you want to describe a phase transition between two different phases, you want to describe how the symmetry of the two phases changes. These are the so-called Landau-type phase transitions. The most convenient example, which everyone is familiar with, is the one-dimensional transverse-field Ising model. You have Ising spins pointing along the z direction: they can point either up or down along z. You have nearest-neighbor interactions between spins along the z direction. Oh, actually, in this talk I want to use X, Y, and Z to label the Pauli matrices on every site. So this term will be Z i Z j. This term is completely classical, of course; every term here commutes. But you can add some dynamics by putting in a transverse field.
By tuning the ratio of the two coupling constants, h over J, you can see there is some kind of transition. If you tune the parameter h over J, you can show, for example in one dimension, that when h equals J you go through a transition. When h is small, you are in the so-called ordered phase. In the ordered phase, Z i basically acquires an expectation value; or, more precisely, if you measure the correlation function of Z i and Z j, it remains nonzero even as the distance between i and j goes to infinity. That is the ordered phase. When h is large, these terms flip the spins along the z direction, flipping up to down. Therefore, when this term dominates, you get into a disordered phase. In the disordered phase, there is no expectation value, it equals 0, and all the correlation functions are short-ranged. So there will be a transition between the two phases. Now, the system has exactly one symmetry here; let me call it the x rotation, the product over all spins of X along the x direction, where i and j label lattice sites. This obviously commutes with the whole Hamiltonian. In the disordered phase, where all spins point along the x direction, the ground state preserves this symmetry. However, the ordered phase has spontaneous breaking of the symmetry: if you look at the ground states of the ordered phase, in a very simple picture you have two degenerate ground states, one with everything pointing up and the other with everything pointing down. Separately, they are not invariant under this symmetry; in particular, the symmetry maps this ground state to that ground state. You can further form superpositions of the two ground states: everything pointing up plus everything pointing down, and everything pointing up minus everything pointing down. In this case, you can show what this symmetry operator does acting on these ground states.
These two superposed ground states have eigenvalues plus and minus 1 under the symmetry. However, these two states are degenerate only in the thermodynamic limit. So we call this spontaneous symmetry breaking, SSB for short. That means that although the Hamiltonian has the symmetry, the ground states of the system break the symmetry in the thermodynamic limit. That's the kind of phenomenon I'm interested in today. In the usual Landau theory, as I have said, Landau believed that phases of matter are described by their symmetry. So if you start from a phase with symmetry G, in order to go into a different phase, the symmetry has to change: generally, G goes to some other symmetry group H, and in general H can be any subgroup of G. The phase transition between these two phases, with symmetry G and with symmetry H, is described by fluctuations of a local order parameter. In the case of the Ising model, the local order parameter is nothing but the expectation value of the spin Z i on the different sites. More generally, the order parameter of the system lives in the coset space G quotient H. In Landau theory, all the transitions between two different Landau phases are described by low-energy fluctuations of the order parameter in the long-wavelength limit. So that describes Landau-type phase transitions. In the Landau case, two different phases can have different symmetries, but one necessary condition is that the symmetry group of one phase has to be a subgroup of that of the other; and this H can generally be any subgroup of G. Now, the question I try to ask is the following. In Landau's theory, Landau assumed symmetry is the only thing describing the system.
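The transverse-field Ising story above can be checked directly in a small exact diagonalization. This is my own illustrative sketch, not from the talk; names like `tfim_ground_state` are mine, and a short open chain is used so that dense numpy diagonalization suffices:

```python
import numpy as np
from functools import reduce

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def op(single, site, n):
    """Operator acting with `single` on `site` of an n-spin chain (identity elsewhere)."""
    mats = [I2] * n
    mats[site] = single
    return reduce(np.kron, mats)

def tfim_ground_state(n, J, h):
    """Ground state of H = -J sum_i Z_i Z_{i+1} - h sum_i X_i on an open chain."""
    H = sum(-J * op(Z, i, n) @ op(Z, i + 1, n) for i in range(n - 1))
    H = H + sum(-h * op(X, i, n) for i in range(n))
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, 0]

def zz_correlation(psi, i, j, n):
    """Expectation value <Z_i Z_j> in the state psi."""
    return float(psi @ (op(Z, i, n) @ op(Z, j, n)) @ psi)

n = 8
ordered = zz_correlation(tfim_ground_state(n, J=1.0, h=0.2), 0, n - 1, n)
disordered = zz_correlation(tfim_ground_state(n, J=1.0, h=5.0), 0, n - 1, n)
print(ordered, disordered)  # long-range <ZZ> stays finite for h << J, vanishes for h >> J
```

For h much smaller than J the end-to-end correlator stays close to 1, while for h much larger than J it is close to 0, matching the ordered and disordered phases described above.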
But we already know there are many things beyond Landau theory, such as, for example, topological insulators, which have been discussed a lot, and also topological orders, which were described, for example, by Michael and Bruno, among others. The problem I focus on is the interplay between symmetry and topological order. The question I try to answer is the following. Imagine I initially start from a topological order; let me call it C. Mathematically, this C is described by a so-called unitary modular tensor category: it contains some objects, objects a, and it also contains morphisms between these objects; these morphisms, for example, encode the fusion and braiding of the objects. So if I start from a topological order C and go through a transition to a different topological order D, and imagine the topological order C initially preserves a symmetry G, my question is: can G be preserved in the new topological order D? Or, if not, what are the possible symmetry groups H which are compatible with this continuous transition between the two topological orders? I will make this more precise later. I will focus on one special type of quantum phase transition between the two topological orders C and D: the type of transition people call an anyon condensation transition. Here I need to emphasize that the anyons that I condense have to be bosonic, meaning the twist of the condensed anyons has to be trivial. Those are the requirements of this anyon condensation transition. I'm going to give a more precise definition of the anyon condensation transition later, but let me first use one example, the toric code, which has been mentioned a lot previously, to demonstrate what I mean by anyon condensation. So let me remind you of the toric code model.
In the toric code model, you have a spin, Pauli matrices, living on every link. The toric code model contains two terms. The first term is the so-called vertex term, a sum over all vertices: there are four spins in the neighborhood of each vertex, on the four links belonging to the vertex, let me label them by L, and you take the product of sigma x over these four links. That's the first term, the vertex term. Secondly, you have a plaquette term: for every plaquette, there are four spins on the boundary of the plaquette, and you take the product over these four boundary links of the Pauli matrix sigma z. These are the two terms appearing in the toric code model. Once you put this on a torus, you can show there are four degenerate ground states, and, as Bruno has commented, there are four superselection sectors in this case. One of them is the e particle, a violation of the vertex term; there is also the m particle, a violation of the plaquette term. So in the unitary modular tensor category language, the topological order C of the toric code contains four different superselection sectors. They are generated by the e particle, the violation of the vertex term, and the m particle, the violation of the plaquette term; there is also a fermion, the fusion of e and m, for which you can show fermionic statistics. The braiding in this theory is described as follows: if you take two different particles, say e and m, and braid one around the other, you always get a minus one phase. So any two distinct nontrivial objects have a minus one mutual braiding. That's the data of the unitary modular tensor category for the toric code.
Now, as Bruno has commented, this model has no dynamics; in other words, if you look at the excitation spectrum of the system, it is basically a flat band. For example, on a torus you have four degenerate ground states, and on top of them you have a very flat band with a gap of, for example, 2: at this gap you have, say, a pair of e particles, and they all have the same energy; similarly, a pair of m particles also all have exactly the same energy. You can actually change the coefficients of the two terms to split this flat band into two different flat bands, for e particles and m particles, but the point is that it is completely flat. Now what we want to do is add some dynamics for these quasiparticles. The simplest way to do this, which is also the most extensively studied so far, is to add a magnetic field to the model. So you perturb the Hamiltonian by delta H: let me choose a magnetic field along the z direction, minus h times the sum over links of Z l. If you still remember from the previous talks: if a Z spin acts on the ground state of the toric code, it violates the vertex terms on the two adjacent vertices. So a single Z l creates a pair of e particles, on this vertex and that vertex. In other words, and also because the e particle has a Z2 fusion rule, e times e equals 1, creating a pair of e's is the same as hopping one of the particles from here to there. Therefore, this term basically gives the e particle some kinetic energy. Does it change the ground state space? No; actually, as long as this term is small, the ground state structure is the same, you still have the four degenerate states.
For a thermodynamic system, they are still exponentially degenerate; for a finite system, there will be a splitting, and that splitting will be changed by h. So basically what I want to say is that this h term gives you some... I'm asking whether it actually changes the microscopic form of the wave function, which it does, right? Yeah, that's right. The many-body wave function will change in that case. In fact, this maps to another familiar model: with this perturbation, you can map it to the transverse-field Ising model in two plus one dimensions, so it is basically a similar transition. You do change the many-body wave function. Yeah, thanks. So what I want to say is that this term gives the e particle some kinetic energy, and once it has kinetic energy, the e particle starts to disperse: initially it's a flat band, the particles are localized, and now there is some dispersion. When this h term is large enough, in particular of order one, at some point the dispersion minimum is going to touch zero, and that drives a phase transition. This is when the gap closes; and when we say the gap closes, there is a specific gap that closes, the gap of the e excitations. This is when we say the e starts to condense: the gap of the e excitations goes to zero. In this case we call it e condensation. And generally, you can push this concept to other topological orders: once you have some excitation spectrum, you can always push it to zero energy, and then the excitations start to condense. Now, one question you may ask is: what happens after this e condensation? What does the new ground state look like?
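The statement that a single Z l excites exactly two vertex terms can be verified by simple support counting, since an X-type and a Z-type Pauli string anticommute exactly when their supports overlap on an odd number of links. This is my own sketch, with my own indexing convention:

```python
# An X-type and a Z-type Pauli string anticommute iff their supports
# overlap on an odd number of links, so no matrices are needed here.
L = 4

def link(x, y, d):          # d = 0: horizontal link, d = 1: vertical link
    return 2 * ((y % L) * L + (x % L)) + d

def star(x, y):             # links on which the vertex term at (x, y) acts with sigma^x
    return {link(x, y, 0), link(x - 1, y, 0), link(x, y, 1), link(x, y - 1, 1)}

z_support = {link(1, 1, 0)}  # a single sigma^z on one horizontal link

# vertices whose star operator anticommutes with that single Z_l
stars_hit = [(x, y) for x in range(L) for y in range(L)
             if len(star(x, y) & z_support) % 2 == 1]
print(stars_hit)  # the two vertices at the ends of the link: [(1, 1), (2, 1)]
```

Exactly the two vertices at the ends of the chosen link are excited, which is the e-pair creation described above; Z l trivially commutes with all the Z-type plaquette terms.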
What does the new wave function look like? Let me give you a rough picture of what it looks like, and of why we call it e condensation. In a physics context, one concept we are very, very familiar with is Bose-Einstein condensation. In the case of Bose-Einstein condensation, if you try to write down the ground state of the BEC, or, I should say more precisely, of a superfluid, it is actually a superposition over different boson particle numbers: a sum with amplitudes phi n of a dagger to the power n acting on the Fock vacuum. The superposition of all these states gives you a superfluid ground state, which has condensation of the boson. So in the usual boson condensation, you have a superposition of many different boson numbers. Here, in the case of e condensation, there is a similar analogy. In particular, let's think about one very simple limit: when h goes to infinity, we know the ground state is basically all spins pointing along the z direction, all spins pointing up. But I want to write this wave function in a slightly different form: I want to write it as a projection of the original ground state, up to some normalization, a product over all links of one plus Z l, acting on the original toric code ground state. We know the toric code ground state is a superposition of many spin configurations, but if I apply this projection, all the spins point up. Now I want to expand this product, so I have many of these Z l's showing up on the links. And we already know the physical meaning of each Z l: it creates a pair of e's on top of the ground state.
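Side by side, the two condensates just described look like this (schematic; the phi n are amplitudes and the product runs over all links):

```latex
\lvert \mathrm{BEC} \rangle \;=\; \sum_{n \ge 0} \phi_n \,\bigl(a^{\dagger}\bigr)^{n}\, \lvert 0 \rangle ,
\qquad
\lvert \mathrm{GS}_{h \to \infty} \rangle \;\propto\;
  \prod_{l} \bigl( \mathbb{1} + Z_l \bigr)\,
  \lvert \mathrm{GS}_{\text{toric code}} \rangle .
```

Expanding the product generates every pattern of Z l insertions, that is, a superposition over all configurations of e-pair creations, which is the sense in which e is condensed.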
Therefore, this state contains creations of many, many e particles everywhere in the system, so in some sense it is a superposition of open-string configurations. In the toric code, in the so-called string-net condensation picture, the toric code ground state can be viewed as a superposition of closed-loop configurations. I define a loop by X l: if X l equals plus 1 on a link, there is no loop segment there; if X l is minus 1, there is one. The toric code ground state has been proven to be a superposition of all closed-loop configurations. But now, if I act with a Z l operator on the toric code ground state, it creates an open string end there. Therefore, this new ground state will be a superposition of all possible open-string configurations: for example, one open string of this length, another open string of that length, two open strings here and here, and so on. In this extreme limit, it is a superposition of all possible open-string configurations. That's literally what I mean by e condensation: I have many e particles just moving freely in this ground state. So that was a picture of what the condensation is; now let me give a more rigorous mathematical definition. The rules of anyon condensation are given as follows. A mathematical reference for this: I think it was first really summarized by Liang Kong in the following reference; many physicists had worked out a lot of examples of anyon condensation, but I think the mathematical formulation is summarized there. In order to have anyon condensation, I first have to have a so-called condensable algebra. I'm going to use the toric code to demonstrate this.
A condensable algebra A is some object inside the category C; it is generally a composite object, a direct sum of simple objects. In the case of the toric code, for example, A = 1 + e if I want to condense e, so it contains two objects, 1 and e. And there are rules this algebra must satisfy: it is a so-called connected étale algebra, where étale means commutative and separable. I am going to explain what these words mean. It is not just an object, actually; there are morphisms associated with A. One is the multiplication mu: A tensor A goes to A. There is also a morphism mapping 1, the original vacuum of the category, into A; let me call it i. These are the morphisms associated with the condensable algebra. First, what does commutative mean? Commutative means the object A should have trivial braiding with itself: the multiplication composed with the braiding is the same as the multiplication. Physically, it means the objects you condense must have bosonic self-statistics, and mutually they must also have bosonic statistics. Why? Yes, that is a very good question. Bosonic statistics means that if I braid one around the other, there is no phase, no Berry phase. As we said, the condensed state is like a superposition of all the open strings, and you do not want the open string configurations to cancel each other out. For example, you could have one configuration like this, which is also an open string, and another one like that. You do not want these two to cancel each other: if the braiding is not trivial, the amplitudes are going to cancel out.
Is it not possible to condense something like a free fermion state at finite density? You mean exciting fermion excitations in the toric code; but that is not something we would call condensation. Maybe not, but it seems like an equally interesting phase transition. That's true; you're right, I should make a comment on that. Anyon condensation is only one type of phase transition between different phases. There are transitions which we know for sure are not described by anyon condensation. One of them is the transition between different SPT phases: for example, a transition between fermionic SPTs with index nu = 0 and nu = 1 is definitely not an anyon condensation transition. Also, a transition which changes the chiral central charge of a topological order, meaning the central charge of the edge state, the central charge of the right movers minus the left movers, is not an anyon condensation transition. Therefore the two phases on either side of an anyon condensation transition must have the same chiral central charge. So there are many transitions beyond anyon condensation; I am not claiming this is the only kind. But this kind of transition is somehow better understood, so we can draw some nice conclusions on top of it. Of course those other transitions are also very interesting questions, and I do not think there is a very general interacting framework to address them; you are very welcome to think about those things. So that was commutativity. There is another property, connectedness. Connected basically means the vacuum shows up in A exactly once: mathematically, the dimension of the Hom space Hom(1, A) is 1.
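The commutativity requirement (only bosons with trivial mutual braiding can condense) can be checked mechanically for abelian anyons using topological spins and the monodromy formula M(a,b) = theta(a x b) / (theta(a) theta(b)). A small sketch of my own for the toric code, labeling anyons as Z2 x Z2 pairs with e = (1,0), m = (0,1):

```python
# Toric code anyons: (p, q) in Z2 x Z2, topological spin theta = (-1)^(p*q).
def theta(a):
    p, q = a
    return (-1) ** (p * q)

def fuse(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def monodromy(a, b):
    # full braid of a around b, for abelian anyons
    return theta(fuse(a, b)) / (theta(a) * theta(b))

def commutative(A):
    # condensable requires every object to be a boson (theta = 1)
    # and all mutual braidings inside A to be trivial
    return all(theta(a) == 1 for a in A) and \
           all(monodromy(a, b) == 1 for a in A for b in A)

one, e, eps = (0, 0), (1, 0), (1, 1)
print(commutative([one, e]))    # True:  1 + e is a condensable algebra
print(commutative([one, eps]))  # False: epsilon is a fermion, cannot condense
```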
I should say, physically, you should really think of A as a superposition of many states: each simple object is one sector, and you take a direct sum of the different sectors. That is how you should think about this commutative algebra. More precisely, physically, you can focus on a finite region of the system and ask which superselection sectors show up in that region. As I said earlier, you have all the possible open strings. So in this finite region you could have no string ending in the region, which gives you the vacuum sector; or you could have one string terminating in the region, which gives you an e sector. So you have 1 plus e, both of them. That is how you should physically think about the condensable algebra. Finally, there is the condition of separability. Separable means there exists a splitting map, call it tau, from A to A tensor A, such that composing tau with the fusion map mu gives the identity map on A. Well, I have to say I have less understanding of this mathematical rule, but I can give you a physical example to illustrate what separable means and where the word comes from. Let me first work out the toric code example. I think there should be another condition, that it actually is an algebra: the composition on A tensor A tensor A is associative. Yes, yes, that's right; I didn't describe that. So this has to be associative. The condensable algebra is described by the object A and the two morphisms, and they must satisfy associativity; the associator really comes from the F symbol for A, A, A. These maps have to satisfy that condition.
Actually, there are other conditions you can write down from these sorts of diagrams; I am skipping all of those. OK, so let me first use the toric code to illustrate what this condensation does, and how to understand the new phase D that I obtain after the condensation. Let me erase this part. In the following I am going to give you rules that determine the other side of the phase transition, once I have a condensable algebra A. Mathematically, D is defined as the category of local A-modules. Let me give you the physical rule, and let me also write down the definition. A left A-module is a set of objects M together with a map mu_A^L from A tensor M to M, satisfying certain compatibility conditions. Let me explain physically what this means. Right now we said A is condensed, which means that in the new phase A is really the vacuum. Therefore, if I have two objects in C which differ by some object inside A, they should be identified in the new phase. That is physically what the module structure means. And a local A-module means the objects in the module must be local with respect to A: the double braiding of M with A is trivial, so braiding A all the way around M is the same as doing nothing. In other words, the objects that survive after the condensation, in the new category D, must have trivial braiding with A. That is the physical meaning of a local A-module. In the case of the toric code, let me explain what I mean. Originally the toric code has four objects: 1, e, m, and epsilon.
In this case, I try to condense e. First I need to identify the modules, the left A-modules. We know A has two objects, 1 and e. Besides those, C also has m and epsilon, where epsilon is e cross m. As I said, any two objects which differ by something inside the condensable algebra should be equivalent, so m and epsilon are the same: you identify them. The other rule is locality: you need to ask whether these objects have trivial braiding with e. In this case both m and epsilon, as I said earlier, have nontrivial braiding with e. Therefore, physically, they are confined in the condensed phase. So after the condensation you do not have anything nontrivial left: you get the trivial category, just the vacuum. This is the simplest case, applied to the toric code. But the simplest case does not really illustrate the separability property, so let me use a slightly more complicated example: the category people call Ising times Ising-bar. Physically, you can think of a p+ip superconductor on the top layer and a p-ip superconductor on the bottom layer, and you consider the pi fluxes on both layers as excitations of the system. So on the top layer you have an Ising category, and on the bottom layer an Ising-bar. The objects are generated by 1, psi, sigma on top and 1, psi-bar, sigma-bar on the bottom: psi is the fermion on the top layer, sigma is the pi flux on the top layer, psi-bar is the fermion on the bottom layer, and sigma-bar is the pi flux on the bottom layer. The bottom layer is p-ip, the top layer is p+ip. One condensable algebra in this case, you can show, is 1 plus psi psi-bar. How do you realize that concretely?
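Before moving on to the Ising example, the toric code bookkeeping just described (confine whatever braids nontrivially with the condensate, then identify objects that differ by fusion with A) can be checked with a short script. This is my own illustration, not from the lecture; anyons are labeled as Z2 x Z2 pairs with e = (1,0), m = (0,1), and the mutual braiding phase of a and b is (-1)^(p_a q_b + q_a p_b).

```python
# Condense A = 1 + e in the toric code and compute the deconfined remnant.
def braid(a, b):
    # mutual statistics phase between toric code anyons a and b
    return (-1) ** (a[0] * b[1] + a[1] * b[0])

def fuse(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

anyons = [(p, q) for p in range(2) for q in range(2)]  # 1, e, m, epsilon
A = [(0, 0), (1, 0)]                                   # condensable algebra 1 + e

# Step 1 (locality): keep only anyons that braid trivially with everything in A.
deconfined = [x for x in anyons if all(braid(x, a) == 1 for a in A)]

# Step 2 (identification): x and x fused with e become the same object.
classes = {frozenset(fuse(x, a) for a in A) for x in deconfined}

print(len(classes))  # 1: only the vacuum survives, i.e. the trivial phase
```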
You can actually construct an exactly solvable model for this case. In some sense you can think of gauging the Z2 fermion parity of this superconductor: you make the pi fluxes dynamical excitations. Or I should say, physically, you couple them to a gauge field; if you give dynamics to the gauge field, you get the pi fluxes as excitations. You complained yesterday that this is not how you should think of it; you should think of it dynamically. Yes, if you couple to the dynamical field, you gauge it. I don't know who made that comment. It must be Ashwin. OK. So now I am condensing this condensable algebra: I condense psi psi-bar. Physically, this means I tunnel psi and psi-bar between the two layers; you should think of the p+ip layer and the p-ip layer starting to talk to each other. Once they start to talk to each other, as you know, the combined system has trivial chiral index. Therefore, after the condensation, if this is my original category C, you expect the new category D, which is C_A^loc in this case, to be just the toric code: the toric code is the Z2 gauging of a trivial superconductor. So now I want to really work this out, and I want to use it to illustrate separability. Sorry, you described this condensable algebra nicely, but you didn't say how D is actually defined. D is C_A^loc, defined as the category of local A-modules: basically the objects which are not confined, identified with respect to A. OK, yes. So we want to think about this case. Again, the first step is to identify the local objects; we said they have to be local A-modules, so they must have trivial braiding with psi psi-bar.
We immediately know sigma has nontrivial braiding: braiding sigma with psi gives a pi phase, because sigma is a pi flux, and a fermion going around it picks up a phase of pi. Therefore sigma is confined. Similarly, sigma-bar is confined, sigma psi-bar is confined, and sigma-bar psi is confined. OK, those are gone. What you are left with is 1, psi, psi-bar, and sigma sigma-bar; I think that is everything. There is also psi psi-bar, but psi psi-bar is identified with the vacuum, so let's forget about it. And if psi psi-bar is the vacuum, then psi is the same as psi-bar, because psi equals psi psi-bar fused with psi-bar. Taking the module means these two are identified. So now you are left with these objects. Sorry, did you ever mention what the fusion rules are? No, not really; thank you. The fusion rules: the two Ising copies are separate and independent, and within each copy psi cross sigma equals sigma, sigma cross sigma equals 1 plus psi, and psi cross psi equals 1. Two fermions make a boson; two pi fluxes can give either even or odd fermion parity; and a pi flux combined with a fermion is still a pi flux. OK, so now you are left with the object sigma sigma-bar, and there is a peculiarity here. If you compute its fusion with itself according to these rules, you get sigma sigma-bar cross sigma sigma-bar equals 1 plus psi plus psi-bar plus psi psi-bar. As we have said, after the condensation psi psi-bar is the same as 1. Therefore you have a dimension-two space for the vacuum sector: the fusion rule says sigma sigma-bar is its own antiparticle, but the fusion channel into 1 has multiplicity two. This is not allowed in a UMTC.
So you want this object to split. What I mean is that in a UMTC, for a and its antiparticle a-bar, the multiplicity of the vacuum channel in a cross a-bar has to be 1. This is not allowed, so you have to split it. The way to see the splitting is to count the quantum dimension of sigma sigma-bar: you figure out it is 2, so it splits into two objects, which are nothing but e and m of the toric code. That is how you end up with the toric code: psi, identified with psi-bar, becomes the epsilon particle of the toric code, and sigma sigma-bar splits into e and m. Sorry, what are e and m in your original category C? In C you don't have them; sigma sigma-bar is a combination of e and m. When you form C_A^loc you have to enlarge the category by adding direct summands, some kind of completion, to get e and m. You need to allow an object to split after you take the local A-modules, and there are certain rules for when the splitting happens; you satisfy all the rules and then you get this. This is like the idempotent completion of the category, where you add direct summands of objects: if there is a projection on an object, you want it to have an image or something. That is probably true; I cannot answer that precisely. My collaborator probably can. Sorry about that. OK, so this splitting is, I think, what the word separable is about for the algebra. So now I have explained what anyon condensation is. The next thing I need to explain is the symmetry action on a topological order; I am going to spend a considerable amount of time on that. So far I have specified: starting from a category C, if I condense A, what is the new topological order D I get. But now the question is: if this topological order has a symmetry G, does the condensation preserve G or not?
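As a sanity check on the Ising x Ising-bar example, the quantum dimensions track the whole story: the total dimension drops from D_C^2 = 16 to D_D^2 = 4, the value for the toric code, and sigma sigma-bar has dimension 2, forcing it to split into two dimension-1 anyons. A sketch of my own, assuming the standard formula D_D^2 = D_C^2 / d_A^2 for condensation by an algebra of dimension d_A:

```python
import math

d = {"1": 1.0, "psi": 1.0, "sigma": math.sqrt(2)}  # Ising quantum dimensions

# C = Ising x Ising-bar: dimensions multiply across the two layers.
dC2 = sum((d[a] * d[b]) ** 2 for a in d for b in d)   # total dimension^2 = 16

dA = 1.0 + 1.0          # dim(A) = dim(1) + dim(psi psibar) = 2
dD2 = dC2 / dA ** 2     # assumed formula: D_D^2 = D_C^2 / d_A^2

print(round(dD2, 9))                      # 4.0, the toric code value
print(round(d["sigma"] * d["sigma"], 9))  # 2.0: must split into two d = 1 anyons
```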
Or is there some obstruction to preserving G? That is the kind of condition we want to derive. To answer that question, I first have to understand how G acts on the category, so let me give you some introduction to that. This subject has been studied a lot in physics recently, in the past five years or so. The phenomenon is what people call symmetry-enriched topological order, which can be viewed as a generalization of the topological insulator concept to topological orders; it is called symmetry-enriched topological order, or SET order. Mathematically, the problem reduces to understanding G actions on a unitary modular tensor category, for a group G. This was studied by Etingof, Nikshych, and Ostrik; the reference is their paper "Fusion categories and homotopy theory". Basically what they study is G extensions of fusion and braided tensor categories. This mathematical work was translated and put into a proper physical context mostly by two papers: Barkeshli et al., and Kitaev also has published work on this subject. I am not going to give the whole story; I will just summarize the main results and give you a physical idea of what they mean. I want to understand how a symmetry acts on the anyons in the system. In a topological order you have a bunch of anyons with fusion and braiding, and you want a G symmetry action on the anyons that preserves braiding, fusion, and the F symbols. What you work out in the end is that there are basically two pieces of data that fully characterize the symmetry action on the anyons; I should say, fully characterize the universality class of the symmetry action on a topological order.
The first one is a symmetry action rho, a map from the symmetry group G to the automorphism group of the category C. Physically, this is how the symmetry permutes anyons: we have different anyons in the system, and I can ask how the symmetry permutes them. Let me use the toric code as an example. In the toric code, the e particles sit on the vertices and the m particles sit on the plaquettes. If you write the model with the two terms having the same coefficient, you can show the model explicitly has a symmetry: a diagonal translation. Let me call it T_D-tilde; it physically maps the positions of e to the positions of m. More precisely, this diagonal translation is written as T_D-tilde equals T_D, the pure spatial translation, multiplied by a global spin rotation. Because you have spin sigma-x terms on the vertices and sigma-z terms on the plaquettes, you need to rotate x into z: the rotation is e^{i pi/4 sigma^y_l}, summed over all links l in the exponent. That is the diagonal translation. You can show it commutes with the Hamiltonian, and indeed it switches e and m. So in this example, rho(T_D-tilde) maps e to m. Yes, this is the e-m duality. Sorry, it is really not a translation, is it? Well, on the superselection sectors it acts like one; it is like changing integer coordinates to integer coordinates plus one half. The physical model has this translation; you can show it just commutes with the Hamiltonian, on a periodic lattice. And it does not have to be a translation: you can also realize it as an on-site symmetry; there is a paper with an exactly solvable model realizing that symmetry on-site.
If you take that T_D-tilde, don't you land on the wrong sublattice? Well, the real physical degrees of freedom are just on the links, and this maps a link to a link, so it is OK. So, for example, the T_D-tilde action in the toric code switches e and m. That is the first level of the symmetry action. I should say that at this level there can be an obstruction, an H^3 obstruction: it belongs to the third cohomology H^3 of G with coefficients in the abelian anyons of the system. I really cannot explain this very well in a simple way. Physically it means that some permutation actions cannot be realized in a purely 2D lattice model with an on-site symmetry; you have to realize them on the surface of a three-dimensional system or some other setup, anyway not in a purely 2D local Hamiltonian. And then there is a second level. Is this T_D-tilde the diagonal translation? You can do it several times; what group does it generate, like the identity usually? Oh, it is a Z group, yes, that's correct, I should write Z. In this case it is Z, but somehow secretly you can see it is like a Z2: when you do it twice, a lattice site goes to another lattice site; it looks like just an ordinary translation, which does not do too much. It is actually a symmetry of everything we are talking about, right? The square of the diagonal is just an ordinary lattice translation. Yes, exactly, precisely. I should say, in that language, if you think of the whole thing as just a square lattice like this, the model can be rewritten in the so-called one-plaquette form, as Xiao-Gang Wen has done; in that case the symmetry really is just an ordinary translation.
And he did show that in that case the defect of this translation symmetry, which is just a lattice dislocation, is a Majorana fermion: it has quantum dimension square root of 2. So "just a translation" means it is trivial, or are you saying something interesting about translations? It is not trivial, because it permutes the anyons. Right, so the Z is acting effectively in this case. Yes, that's right. So we really should think of G as Z? Yes, in the physical model you should think of it as Z. OK. The second level of the symmetry action is less intuitive than the first: it is so-called symmetry fractionalization. Let me give a general introduction; there is also an illustration in the toric code model. First let me make an assertion. Symmetry fractionalization means the following. If you have a symmetric topological order, a gapped phase preserving the symmetry, then you can somehow define a local version of the symmetry action. Let us say right now I am focusing on unitary finite symmetries. If the system has a global symmetry, you can always define some local version of it: call it U_g^loc, supported on some finite region of the system. What you can show is that the composition of two such local actions, for group elements g and h, equals U_{gh}^loc up to a phase omega(g, h), and this phase generally depends on which anyon is inside the region: if the anyon a is in the region, the phase is labeled omega_a(g, h). I have to give a more precise definition of this local symmetry action, which is the following.
Take a region whose boundary annulus is wider than the correlation length of the system. Within the inner region, the action of U_g^loc is exactly the action of the global symmetry restricted to that region; near the boundary it is somehow modified. Does it commute with the Hamiltonian? Yes. This is really an assertion: I am claiming U_g^loc commutes with the Hamiltonian, so it is an actual symmetry of the system. I should say it commutes within some subspace, not with the full Hamiltonian: restrict the Hamiltonian to the subspace with a finite number of anyon excitations, and with no anyons in the boundary region, so anyons live only inside or outside. Restricted to that subspace, I can show this actually commutes. I should say this is a belief; it is not really a proof. If you have some anyons on the boundary region, then you cannot find such a local action; at least I do not know how to. So let me use an example, again the toric code; after the example I will show the general picture of this so-called symmetry fractionalization. I should say this is really a projective action of the symmetry on the anyons; as you can see, it is a projective representation. That is what symmetry fractionalization means. I will also say there are well-known examples of this in physics: for instance, in fractional quantum Hall systems, the quasiholes of the Laughlin state carry fractional charge. That is a very well-known example of this phenomenon. So let me consider the following symmetry: a translation T_y along the y direction. This is not the diagonal translation; it is just translation by one lattice site.
And let me consider another, slightly unusual symmetry; call it R_x. This R_x is the product, over all the vertical links, of the spin Pauli-x operator: sigma^x on every vertical link, and only the vertical links. You can easily check this commutes with the toric code Hamiltonian: each plaquette term intersects this vertical link and that vertical link, two of them, so in total it still commutes; and you can easily check it commutes with the rest of the Hamiltonian. Now you want to ask: how do we define a local action of this symmetry? Locally meaning, imagine I restrict myself to the region between two horizontal lines; that is how I define my local region. I want the symmetry to act exactly like the global version within this finite region. But clearly the truncated product of these operators does not commute with the Hamiltonian: for example, a plaquette at the edge of the region is going to anticommute with the truncated operator, so that plaquette gets flipped. The Hamiltonian does not commute with the naive restriction. But in this case you can do a modification. Let me use lines to denote the symmetry action: any vertical link crossed by such a line gets a sigma^x assigned to it. So I have these lines denoting my symmetry action within the region. Now, if you know the toric code, each such line is nothing but an m string: the product of sigma^x on these links is an m string. So one way to make this an actual symmetry commuting with the Hamiltonian is to close up the m strings: you can choose a way to route each string around and close it.
And you can close the string here, closing this end. If I close it, you can see the operator is just a product of a bunch of closed strings, and of course it commutes with the Hamiltonian. So now you have succeeded: this is really my local action, my U_{Rx}^loc, restricted to this region; that is how it is defined. Now, this setup also preserves T_y, the translation symmetry, so you can ask how the two interact with each other. We know that globally, R_x and T_y definitely commute on the whole system. But restricted to this subsystem, I can ask the same question. What you can show is that translating the local action makes every string go up by one: let me draw the new configuration; every green string is translated onto a red one. So the two operators differ only by strings at the boundary, because each translated string coincides with an original one, and what is left over is just a string on the boundary. More precisely, what I am writing here is: T_y U_{Rx}^loc T_y^{-1} equals U_{Rx}^loc multiplied by a boundary string, an m string on this boundary and on that boundary. So what I have shown is that T_y U_{Rx}^loc T_y^{-1} (U_{Rx}^loc)^{-1}, the commutator of the local symmetries, which I argued commute with the Hamiltonian, is equal to just the m string. OK. So now imagine you have an e particle sitting here, let's say, created by some e string ending at this point.
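The leftover-boundary-string computation can be mimicked with a toy model of my own: represent U_{Rx}^loc by the set of rows of vertical links it flips (products of commuting sigma^x operators multiply by symmetric difference of their supports), translate every row up by one, and see what the commutator leaves behind.

```python
# Rows 3..6 of vertical links carry the local symmetry action U_Rx^loc.
region = set(range(3, 7))

# T_y shifts every m string up by one row.
translated = {r + 1 for r in region}

# Multiplying the translated operator by the inverse of the original:
# commuting sigma^x products compose by symmetric difference of supports.
leftover = region ^ translated

print(sorted(leftover))  # [3, 7]: only the two boundary rows survive,
                         # i.e. the commutator is a pair of boundary m strings
```

The bulk rows cancel pairwise, exactly as the green and red strings cancel in the picture.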
If I apply this combined action to the single e particle, I can use the local version of the symmetry action, and then the e string intersects the m string once. As I said, e strings and m strings have mutual statistics. So what I have shown is that acting with this combination on a single e particle gives a minus-one phase, and this minus one is nothing but the braiding of the e and m particles at the boundary. Using this example, what I have illustrated is that generally this omega_a(g, h) can be written as an S-matrix element: the braiding of the anyon a you act the symmetry on with some other anyon, which I label omega(g, h). In this case, omega(g, h) is m. And what these people have proven is that, generally, for a UMTC with nondegenerate braiding, this phase has to be an element of the S matrix, and this omega(g, h) has to be an abelian anyon. So omega(g, h) belongs to the group of abelian anyons, call it A, of the topological order. We can take a few minutes. Yeah, OK; I expected to spend a lot of time on these things. So now I have argued that this extra phase really comes from some abelian anyon, and this phase must satisfy associativity. If you compose three symmetry actions g, h, and k, first doing hk and then g must agree with first doing gh and then k, and translated into the anyons this is nothing but a cocycle condition: you can show omega(g, h) times omega(gh, k) equals omega(g, hk) times omega(h, k). This is nothing but the 2-cocycle condition. Therefore I have basically shown that symmetry fractionalization in this case is characterized by an element omega of the second cohomology H^2 of the symmetry group G, with coefficients in the abelian anyons. Yes.
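The 2-cocycle condition can be verified by brute force in a toy example of my own choosing: G = Z2 with omega valued in the toric code abelian anyons Z2 x Z2, taking omega(1,1) = m and trivial otherwise (this particular omega is a hypothetical choice for illustration, not the T_y, R_x example from the lecture).

```python
# Check omega(g,h) * omega(g+h, k) == omega(h,k) * omega(g, h+k) for all g,h,k.
def omega(g, h):
    return (0, 1) if (g, h) == (1, 1) else (0, 0)  # m = (0, 1), vacuum = (0, 0)

def fuse(a, b):
    # the abelian anyons form the group Z2 x Z2 under fusion
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

ok = all(
    fuse(omega(g, h), omega((g + h) % 2, k))
    == fuse(omega(h, k), omega(g, (h + k) % 2))
    for g in range(2) for h in range(2) for k in range(2)
)
print(ok)  # True: this omega is a valid 2-cocycle
```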
Generally, if you also have a nontrivial symmetry permutation rho, then it is a twisted version of group cohomology; but in general, symmetry fractionalization is classified in this way. OK, so now I think I am ready. Yes? For the toric code, is the abelian anyon group just the whole thing? Yes: every particle in the toric code is an abelian anyon, so it is the whole set. So now I am ready to make some assertions. Let me apply this to the case of the toric code. In the group-cohomology language, the equation we derived says the following: each omega individually is not gauge invariant, but the combination corresponding to T_y R_x T_y^{-1} R_x^{-1} is gauge invariant, and it equals m, one of the abelian anyons of the toric code. OK. With this knowledge, I am now ready to state what our theorem is about. Our theorem has the following physical picture. As I explained earlier, after anyon condensation the ground state is like a superposition of many open strings. Now suppose that after the condensation the state still preserves the symmetry. As I argued earlier, if the system preserves the symmetry, it must admit a local action of the symmetry: locally, the system must still admit the local symmetry action after the anyon condensation. So consider the new ground state, a superposition of configurations: for example, one with a single open string in a given finite region, another with none, another with two open strings in the region, and so on. This superposition of many, many different configurations is my new ground state after the condensation.
I want to show this new ground state still admits some local action of the symmetry in this case. OK. But remember, these particles become like a vacuum. This is the vacuum of the new phase — the ground state of the new Hamiltonian after the phase transition. So since it's the ground state, there is no anyon there, right? There's no anyon there. So you expect the local action to be trivial — not a projective rep — in the condensed phase, because these anyons become the vacuum. So therefore, you want the symmetry to act non-projectively, but linearly, within the subspace of any anyon that condenses. So that's physically the requirement. Now let me write down the mathematical answer for this — the rules we have derived for the symmetry not to be broken after the phase transition. There are basically two rules. So I'm only giving you a very rough argument, but that argument can be made more precise by looking at an edge version of this argument, and it can also be proved rigorously within category theory. But now let me state the conclusion. The conclusion is that there are two conditions, and we think these are the only universal conditions, based on the algebraic structure. So actually, as I have said earlier, in the case of Landau theory, the symmetry-breaking pattern is purely determined by group theory, right? So you expect some kind of algebraic answer — you should expect some kind of algebraic formula, in the end, for the symmetry not to be broken across the transition. So there are two conditions. One condition is on this condensable algebra A: A is preserved by the symmetry. I should say A is stable under the G action. A is the collection of anyons that condense, so if the symmetry G permutes anyons, I don't want it to map anything outside this subset A.
I want the permutation to preserve this set A, so A is stable under the G action. This is natural, right? For example, in the case of the toric code, if I condense e, this e–m exchange symmetry has to be broken, because it maps e to m. So it has to be broken. That's a prediction. The second condition is that the symmetry fractionalization has to be trivial, and that can be made more precise by looking at the following short exact sequence. So I should say, mathematically, this is equivalent to saying there is a G-invariant algebra structure on this condensable algebra. So there is a short exact sequence, 1 → Aut(A) → Aut_{C_G}(I_A) → G → 1. So this is some extension of G. I need to explain that a little bit. Yeah, sure. So assume the symmetry is preserved. If I start from a theory C and I gauge the symmetry G, I get a new category, C_G. I can also condense A; then I get D, which is C_A^loc. In this case, I can also gauge G, and I will get — let me call it — D_G. But now, I_A is some condensable algebra in C_G; I condense I_A, and I get D_G. The claim, which can be proved within category theory, is that if A admits a G-equivariant algebra structure, then this whole diagram commutes — meaning if you first gauge and then condense, or first condense and then gauge, the results are equivalent. Yeah. What symbol did you actually write in the middle of that exact sequence? Automorphism sub what? Automorphisms in C_G — meaning automorphisms within this big category, after gauging the symmetry G. Yeah. What's inside? I_A is the new condensable algebra; it's a bigger condensable algebra after you gauge the symmetry. OK, but I_A, I guess, is defined by this A. Yeah, that's right. That's correct. So actually, you don't need to work out what is in the middle; you just need to know how you extend the symmetry action of G. Yeah.
So actually, this part is really given by the cocycle. Yeah. So the claim is the following. If this short exact sequence splits — meaning there exists a map sigma from G to this middle group such that pi composed with sigma is the identity on G — then G can be preserved. If it doesn't admit a splitting, then G has to be broken. OK, so this is also equivalent to saying — splitting means there exists a sigma such that pi ∘ sigma is the identity — if I call the middle group E and this group N, that E is a semidirect product of N and G. Whenever it has such a semidirect-product structure, this means it splits; it's equivalent. So in particular, in the case where this automorphism group N is abelian — for example, in the case of the toric code, A is 1 ⊕ e, and in this case Aut(A) is Z2, generated basically by braiding an m around e — if N is abelian, this is the same as saying that the previous omega is a trivial element of the second cohomology H²(G, N). OK, so the prediction is basically, in this case, because this cocycle is non-trivial, if I condense e you have to break either T_y or R_x. I could show you literally why this works, but maybe I will stop here. So if you're interested, you can ask me. Thank you. Thank you. Thank you very much.
In the context of quantum spin liquids, it is long known that the condensation of fractionalized excitations can inevitably break certain physical symmetries. For example, condensing spinons will usually break spin rotation and time reversal symmetries. We generalize these phenomena to the context of a generic continuous quantum phase transition between symmetry enriched topological orders, driven by anyon condensation. We provide two rules to determine whether a symmetry is enforced to break across an anyon condensation transition or not. Using a dimensional reduction scheme, we establish a mapping between these symmetry-breaking anyon-condensation transitions in two spatial dimensions, and deconfined quantum criticality in one spatial dimension.
10.5446/56103 (DOI)
In my talk, I would like to present some results that I recently developed in joint work with Alexander Alldridge and Martin Zirnbauer at the University of Cologne. The topic is, as the title says, bulk-boundary correspondence in disordered systems, but primarily it's about how to define topological phases in disordered systems and how to motivate them well from physical principles. And we are not restricting to the single-particle picture but to free fermions — so no interactions in this talk — but as we don't consider fixed particle number, we have to work in the more general setup of Nambu space. I'm not sure if everybody knows this, so I give a short reminder at the beginning of what the Nambu space picture is for many-particle systems. Once I've settled this, I will go over to the topological classification in the bulk. The essential assumption there will be that my systems are gapped. And once I've established this classification for all symmetry classes of the Altland-Zirnbauer table, I will then go over and do a similar thing on the boundary, where, of course, it's essential that we have a gapless spectrum. And once I've put these two definitions of topological phases on firm mathematical grounds, I can formulate the bulk-boundary correspondence. Okay, so let's start with the Nambu space picture. The fundamental ingredient of my picture is some single-particle Hilbert space. In my case, I will only consider solids: it's the usual ℓ² space over Z^d, and I have some finite-dimensional vector space, which represents the local degrees of freedom. All this is a complex Hilbert space, and any given ground state, or any chemical potential, determines a decomposition of the Hilbert space into conduction states and valence states. So a given ground state at zero temperature is then the state where all valence states are filled and all conduction states are empty. Now, in the many-particle picture, excitations from this ground state are described by the Fock space.
So here it is essential that we have the conduction states and the dual of the valence states. This is because we want the zero-particle sector in this direct sum to represent the ground state, and therefore we have to take the holes via the dual space. So it's not just V that's in there. To further understand the meaning of this, we can split this up as written here; then p denotes the number of particles above the chemical potential and q denotes the number of holes below. So the fundamental physical operations I can do on this Fock space are either the insertion of a state above the chemical potential — this is given by the usual insertion of a state; of course we are considering fermions, so we have to antisymmetrize — or I can insert a hole below the chemical potential, and that's the other one. And I can do the reverse and remove a state from above or below. So from these fundamental physical operations I can then define the creation and annihilation operators that probably everybody knows. These are the creation operators written in these terms, and these are the annihilation operators. Note that the annihilation operator indeed annihilates the ground state, so you can check that the Fock space built on V_+ ⊕ V_-* that we started with is indeed the correct space to start from. Well, so from this picture we now have the creation and annihilation operators, and these define the Nambu space. The Nambu space is defined as the linear vector space spanned by all these operators, and it can easily be shown that this space is isomorphic to V ⊕ V*, by identifying the creation operators with V and the annihilation operators with V*. This is a C-linear isomorphism. So as you might directly see here, this Nambu space comes with a canonical real structure, which is always there, which is the particle-hole conjugation. It comes from the anti-linear isomorphism from V to V*, and this can always be defined.
Furthermore, we have the canonical anticommutation relations for these creation and annihilation operators. For fermions these read as here, where the curly bracket denotes the anticommutator. And these relations induce, via this isomorphism, a symmetric bilinear form on the Nambu space. So in total I get a symmetric bilinear form and a real structure, and of course I can then canonically define an inner product by combining these two pieces of information. The take-home message from this slide is basically that the Nambu space is a real Hilbert space just by construction. Here it is also essential that we are considering fermions, because for bosons we would have to replace the anticommutator by the commutator, but this would lead to minus signs here and here, and then we don't have a positive-definite inner product, so that's no longer a Hilbert space. So, yes? What is the isomorphism — is it just the obvious one? Yes; from here you can see that this is linear. You can map this to V, but this is anti-linear in the argument, because here you have the Riesz isomorphism, so I have to identify with the dual in order to get a linear isomorphism. So usually this one is not canonical. What is not canonical? The Riesz isomorphism is canonical if you have an inner product on a Hilbert space. Yes, I need the inner product on the single-particle Hilbert space. Yes? Yes, it's even. Why can you never get an odd dimension? Because the Nambu space is V plus its dual space, so you really must have an even dimension by the structure. So you can't get that situation — the real dimension is even, yes, twice the complex dimension. Okay, so now operators on this Nambu space: general operators in the many-particle picture with interactions would just be contained in the Clifford algebra over W, as Chris, for example, considered.
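The canonical anticommutation relations mentioned here can be verified concretely in a small Fock space. The following is a minimal sketch, not from the talk, using the standard Jordan–Wigner representation of fermionic creation/annihilation operators as matrices; the number of modes n = 3 is arbitrary.

```python
import numpy as np

n = 3                                        # number of fermionic modes
sz = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])      # sigma^-: kills the empty state

def annihilator(i):
    # Jordan-Wigner: c_i = (sz ⊗ ... ⊗ sz) ⊗ sigma^- ⊗ I ⊗ ... ⊗ I
    ops = [sz] * i + [sm] + [np.eye(2)] * (n - i - 1)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

c = [annihilator(i) for i in range(n)]

def anticomm(a, b):
    return a @ b + b @ a

# CAR: {c_i, c_j^dagger} = delta_ij, {c_i, c_j} = 0
for i in range(n):
    for j in range(n):
        assert np.allclose(anticomm(c[i], c[j].conj().T),
                           np.eye(2 ** n) * (i == j))
        assert np.allclose(anticomm(c[i], c[j]), 0)
print("CAR verified")
```

The real span of these 2n operators is (a finite-dimensional stand-in for) the Nambu space, and the anticommutator is exactly the symmetric bilinear form from the slide.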
For the Clifford algebra, of course, you need further conditions like even parity and such, and the Q is just the quadratic form that is induced by the CAR form. But I only want to consider non-interacting operators, and these are generated by these quadratic operators, and one can easily show that these are isomorphic — again by a linear isomorphism — to the linear operators over my Nambu space. And if I only wanted to consider single-particle operators with fixed particle number, then I would only allow these two, and these would correspond to linear operators over my single-particle space. Then I would recover the usual picture of single-particle physics. Okay, so the crucial observation is now that time evolution has to be unitary and orthogonal. Unitary is, I think, a well-established assumption, and orthogonal means here that the unitary time evolution has to preserve the anticommutation relations between the creation and annihilation operators. That's also a reasonable assumption, and together this means that all Hamiltonians that are physically reasonable have to be self-adjoint and skew-symmetric — or, put differently, they have to be self-adjoint and imaginary with respect to the real structure that is induced on the operators over the Nambu space. So that's now fixed, and this is my starting point for the classification. Now, in the classification in the bulk, as I said before, I make the fundamental assumption that the bulk is gapped, so I can always consider the flattened Hamiltonian that squares to one. I have one question there. Yes? Does the non-interacting setting need to be enforced already at that level? In order to consider operators over the Nambu space, it's necessary. Unitary evolution needs to be orthogonal? Well, the complex operators over the space would be double-dimensional. Yes. So I was wondering how you would allow for generic interactions. No — no interactions. This Hamiltonian is in the sense of the Hamiltonian before second quantization.
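The constraint "self-adjoint and imaginary with respect to the real structure" can be illustrated in the familiar Bogoliubov–de Gennes form. This is a sketch under the standard basis convention for V ⊕ V* (not necessarily the talk's exact convention): the particle-hole reality condition reads tau_x H̄ tau_x = −H, which holds for any Hermitian h and antisymmetric pairing Δ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
h = (a + a.conj().T) / 2                     # Hermitian normal-state block
b = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
delta = (b - b.T) / 2                        # antisymmetric pairing block

# BdG Hamiltonian on V ⊕ V* in a standard convention
H = np.block([[h, delta], [delta.conj().T, -h.T]])

tau_x = np.kron(np.array([[0, 1], [1, 0]]), np.eye(n))

assert np.allclose(H, H.conj().T)                  # self-adjoint
assert np.allclose(tau_x @ H.conj() @ tau_x, -H)   # imaginary w.r.t. the
                                                   # Nambu real structure
print("BdG constraints verified")
```

Any quadratic, particle-number-non-conserving Hamiltonian built from the CAR algebra can be brought to this block form.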
Which one? Right, the one on the previous slide. It's a general Hamiltonian over the Nambu space. Yes, that's the one. Still, the state space is some sort of second quantization of the Nambu space. Right — but in the real representation, that's why you... So this observation, you cannot move it? I haven't thought about this in detail yet. Okay. So the aims of the next section are now to establish, first of all, a systematic description of disordered solid-state systems in this Nambu space picture, and then to derive a systematic and constructive classification — that was one of our central motivations, to make everything constructive, so that in the end you can indeed calculate things — and to do this for all symmetry classes of the tenfold way. Okay. So in the bulk, the single-particle space, as said before, is ℓ² over Z^d — any Bravais lattice with some internal space — and the corresponding Nambu space is V ⊕ V*, so I can write it in this form. And, similar to the earlier talk, I introduce finite-range operators as written here; these are just finite-hopping operators. And the C*-algebra that is generated by all these finite-range operators is mathematically equal to the uniform Roe C*-algebra, as observed by Kubota, and I denote this Roe C*-algebra by this symbol here. Okay. So in principle one could start with this C*-algebra, but as we learned yesterday, its K-theory is not very tractable, so we have to go further. And instead of going to the non-uniform Roe C*-algebra, which would be one possible way, I introduce disorder into my systems by using the techniques of Bellissard and many authors that are here in the audience. This goes as follows: I consider a disorder configuration space. For simplicity I assume that it's of this form, where Omega_0 denotes some kind of local disorder on each unit cell, and I copy it for each unit cell.
So I get a canonical Z^d action by translations on this disorder space, and for technical reasons I also need a probability measure. But with this I now consider the so-called disorder observables, which are defined as all the functions from my disorder space into the set of allowed operators over my lattice such that this covariance condition is satisfied. This covariance condition can be thought of as a tautological condition, because the disorder breaks the translation symmetry: if I consider one fixed disorder configuration and shift it, this should be the same as shifting the underlying lattice. So this has to be satisfied, and thus we make this assumption here. And this will be the set of all disorder observables over the bulk lattice. Are those the translations over the lattice? So lambda is the — yeah, just — yes, exactly. So I translated everything into linear operators over my Nambu space, so that I don't have to consider c and c-dagger anymore, because mathematically it's the same; that's why I don't use them here. And the name homogeneous disorder can be understood by the observation here that for any given T in my disorder observable algebra I can average over all disorder configurations, and this yields a translation-invariant operator. So if I zoom very far out of my solid, then I recover a translation-invariant system again. And no disorder is simulated by setting the local disorder space to a point; then this is just translation invariance. Okay, so there is a theorem from 2003 that if I am in the clean case, then this C*-algebra is just the usual C*-algebra of translation-invariant operators, so I can consider the torus again and do the usual classification.
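The covariance condition can be made concrete with a toy Anderson-type model. The sketch below is not from the talk: it uses a finite ring instead of Z (so the check is exact with a cyclic shift), a nearest-neighbour hopping Hamiltonian with on-site disorder omega, and verifies H(tau·omega) = U H(omega) U^T.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
omega = rng.standard_normal(N)               # one disorder configuration

def H(w):
    # nearest-neighbour hopping + on-site disorder w, periodic boundary
    Hm = np.diag(w).astype(float)
    for i in range(N):
        Hm[i, (i + 1) % N] = Hm[(i + 1) % N, i] = -1.0
    return Hm

U = np.roll(np.eye(N), 1, axis=0)            # cyclic shift  e_i -> e_{i+1}
shifted = np.roll(omega, 1)                  # the translated disorder config

# covariance: shifting the disorder = conjugating by the lattice translation
assert np.allclose(H(shifted), U @ H(omega) @ U.T)
print("covariance holds")
```

The map omega ↦ H(omega) is exactly a "disorder observable" in the sense above: an operator-valued function on the configuration space satisfying the covariance condition.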
Now, in our setting, we have a more general theorem: if I consider any disorder configuration space, then I can show that our C*-algebra is isomorphic to the usual crossed-product C*-algebra that is used by Bellissard, Prodan and Schulz-Baldes, and many other authors. And this crossed-product C*-algebra can be considered as the closure of all polynomials of this form, where the U's again denote the shifts along the lattice, and M is now a disorder-dependent local coefficient. So by these natural physical constructions I have in particular shown that I can always write my Hamiltonians in such a polynomial form. Okay. All right. Now, if there are no questions on this, I will go on to explain symmetries to you. Of course I want to consider all symmetry classes of the tenfold way, so I have to explain what I mean by symmetries. A symmetry, we define, is a real — so it preserves the structure of my real Hilbert space — unitary or anti-unitary, that means linear or anti-linear, isomorphism that commutes with the Hamiltonian. This differs slightly from other definitions of symmetries, but we only allow symmetries that commute with Hamiltonians. And in this talk it is sufficient to consider only local symmetries that act only on the internal space. My physical examples of local symmetries are, of course, time-reversal symmetry — initially it acts only on the single-particle Hilbert space, but by my construction it lifts to an anti-linear operator over my Nambu space, which is of this form. Then I also consider charge conservation; this corresponds to the symmetry iQ, where Q is the charge operator: Q is 1 on particles and minus 1 on holes, because they have opposite charge.
And I have to insert the i here because I demand that all symmetries are real; but of course my real structure comes from particle-hole conjugation, and charge anticommutes with particle-hole conjugation, because holes have the opposite charge from particles. Another one is the particle-hole transformation: this is an anti-linear operator that almost looks like my particle-hole conjugation, but it is enriched by some unitary which acts on the single-particle Hilbert space and squares to one. Okay, these are some examples, and now, in order to perform a classification, I could in general try to classify all flattened Hamiltonians in A, but from a mathematical point of view this is a bit awkward, because the Hamiltonians are imaginary, so it's quite unnatural to consider them. Instead, it's much more practical to consider the corresponding real operators, which we call quasi-particle vacua — just an i in front of the H, in the minus-sign convention. So then, of course, I have to translate Hamiltonians with these local symmetries into quasi-particle vacua with symmetries, and these symmetries are called pseudo-symmetries, which were introduced by Kennedy and Zirnbauer. And these are defined as follows: a quasi-particle vacuum J — here I remind you of the algebraic conditions it satisfies — has r positive and s negative pseudo-symmetries if there is a Clifford action into my algebra such that I have r operators that are self-adjoint and square to 1, and s operators that are skew-adjoint and square to minus 1, and all of these anticommute with my J. The small k's and small j's denote the real generators of my Clifford algebra. Okay, and my claim is that all symmetry classes of the tenfold way can be represented in this form — just a QPV that represents the ground state, plus a Clifford action. And that's what I want to explain now.
So let's consider only the real symmetry classes; the complex symmetry classes are already explained in great detail in the book by Prodan and Schulz-Baldes, for example, so I won't consider them here. So let's start with class D. Class D corresponds physically to a superconductor with no symmetries, so I also get no pseudo-symmetries, and that's it. Class DIII corresponds to a superconductor with time-reversal symmetry, and time-reversal symmetry, as explained, is an anti-linear isomorphism on W which commutes with my H but is anti-linear, and therefore it anticommutes with J, which is just i times H. So I can construct exactly one pseudo-symmetry by this here. Okay, so the next class is AII. AII is a topological insulator, because I have charge conservation with time-reversal symmetry. And there I can again define this one, but furthermore I can define another one by setting J2 = i·gamma·T·Q — gamma was the real structure on the Nambu space — and one can verify that this indeed anticommutes with J, squares to minus one, and is skew-Hermitian. Okay, so the next is a topological insulator with time-reversal symmetry and a particle-hole transformation — one may also call it a chiral symmetry — and then one can do similar steps and define a third pseudo-symmetry, and so on and so forth. Maybe I also mention class C, because it's not in the usual form one encounters in the literature for the symmetry classes. So class C is a superconductor — so no charge conservation — with spin-rotation symmetry. Spin rotations are generated by three operators corresponding to the Pauli matrices, but again I have to put an i here in order to make them real, and they commute with H if my Hamiltonian is spin-rotation invariant. And then I have to go to two-by-two matrices in order to define four pseudo-symmetries, and these are then of this form, and my J has this form — H and minus H, yeah.
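The Clifford relations defining the pseudo-symmetries can be checked on a toy example. The matrices below are illustrative choices, not the talk's actual representatives: J is a real skew-symmetric orthogonal matrix (a complex structure, J² = −1), K is a positive pseudo-symmetry (K² = +1, anticommuting with J, as in class DIII), and J2 is a negative pseudo-symmetry (J2² = −1), all realized with real Pauli-type blocks.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
isy = np.array([[0., 1.], [-1., 0.]])        # i * sigma_y, a real matrix

J = np.kron(isy, I2)                         # toy flattened QPV, J^2 = -1
K = np.kron(sz, I2)                          # positive pseudo-symmetry
J2 = np.kron(sx, isy)                        # negative pseudo-symmetry

assert np.allclose(J @ J, -np.eye(4))        # complex structure
assert np.allclose(K @ K, np.eye(4))         # K^2 = +1
assert np.allclose(K @ J, -J @ K)            # Clifford anticommutation
assert np.allclose(J2 @ J2, -np.eye(4))      # J2^2 = -1
assert np.allclose(J2 @ J, -J @ J2)
assert np.allclose(J2 @ K, -K @ J2)
print("Clifford relations verified")
```

All six relations together say that (K, J2, J) generate a real Clifford algebra action with one positive and one negative generator, exactly the (r, s) bookkeeping of the pseudo-symmetry framework.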
Okay, so that's a little bit different, but the other cases are then found in a similar way, just by defining the pseudo-symmetries that I explained before in this block-diagonal form. And these are all eight classes, then, translated to QPVs with pseudo-symmetries. Okay, so now, after all, my physical input is some QPV and some input of symmetries — that's all — and I have, of course, a fixed index of positive and negative pseudo-symmetries. Now I want to consider the set of all allowed QPVs such that this is a symmetric QPV. And I start my classification by considering all homotopy equivalence classes in this set of allowed ground states, or QPVs. If I only considered homotopy theory, I could stop here and just calculate this set, but I want to do K-theory, and therefore I want to stabilize. In the first step I then explain what I mean by QPVs over matrix algebras over A: if I get more bands in my system, then I define it in this way, where phi_n denotes the direct sum — so the diagonal action of phi — and then, of course, I can consider homotopy equivalence classes in this larger algebra. Now, in order to compare QPVs in different matrix algebras, I have to somehow explain how I do this, and this is done by fixing some ground state, or QPV, which I call topologically trivial. I have to fix this, and then I can say: okay, my QPV in the smaller algebra is the same as a QPV in a larger algebra if the latter is just its direct sum with some topologically trivial summand. And this forms an inductive system; then, if I let n go to infinity — so I allow any degrees of freedom to be added — I finally obtain a set of stable homotopy equivalence classes, and then my set of topological phases is defined as the set of all topological phases defined in this way, but always relative to each other. This means I can only tell you what topological phases are relative to each other; I have no absolute notion of a topological phase. This is only fixed by this reference QPV J_ref.
Okay, and then — okay, since this is a bit technical, I will skip this. The only take-home message of this is that this physically motivated set of topological phases can be identified with Van Daele's K-theory, where Van Daele's K-theory is built upon odd self-adjoint unitary matrices; but since I don't have much time, I will skip this here and just tell you that I can identify my group of topological phases with K-theory via this identification. So in the end I obtain a canonical definition of topological phases for some QPV relative to some reference QPV in this way. And, okay, so after all I have a new motivation for the usual crossed-product C*-algebra as the algebra of observables, and my definition of a topological phase only depends on r and s — maybe on my phi here — so it doesn't distinguish between the different symmetry classes. And it is a bit hard to accept at the beginning that I have to fix a trivial topological phase, but it is like this, and I can identify the result with KR-theory. All right, and as an advertisement: with these systematic constructions I can now also define systematic sets of indices. So I obtained that my topological phases live in such a KR-theory group here, and for any given index (r, s) I can define an index — this is only a Z-valued index — by this formula; it's an adapted version of the formula for the complex cases, the so-called Chern character. So this is just an advertisement; it's not possible to explain it here in a completely understandable way. Not yet. Yes. You have something in KR-theory — this is just how you get a number. Does this pairing only see the non-torsion invariants? Yeah; currently I'm trying to use Kellendonk's results in order to also derive formulas for the torsion invariants, but that's a bit more technical and more difficult. Yeah. These J's? Yes. So you always insist that they must be from the algebra. Yes. But they could be in the automorphism group.
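As a flavour of what such numerical index pairings look like, here is the simplest complex-class analogue (not the talk's KR-theoretic formula): the winding-number index of a clean 1D chiral (class AIII) Hamiltonian with off-diagonal block h(k) = t1 + t2·e^{ik}, an SSH-type model. The hopping values are illustrative.

```python
import numpy as np

t1, t2 = 0.5, 1.0                            # |t2| > |t1|: topological side
ks = np.linspace(0.0, 2.0 * np.pi, 2001)
h = t1 + t2 * np.exp(1j * ks)                # off-diagonal block h(k)

# index = (1 / 2 pi i) * closed-loop integral of h'(k)/h(k) dk,
# computed as the total phase accumulated by h(k) around the Brillouin zone
phase = np.unwrap(np.angle(h))
winding = (phase[-1] - phase[0]) / (2.0 * np.pi)
print(int(round(winding)))                   # 1
```

In the disordered, real-class setting of the talk, the cyclic-cohomology (Chern-character-type) pairing plays the role of this phase-winding integral.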
And it's not possible in general to directly identify them with KR-theory, I think. So, yeah, my assumptions are essentially based on this. I mean, you're constructing the J_1 to J_s from the symmetry operators, so they're just in the algebra. I think they're even pointwise, right — they were just in a matrix algebra, and then everything is in there anyway. Yeah, right, in this case, yes. But not for a local symmetry where the real structure is mixing things at different places, if you really want that. Well, I could extend my notions to non-pointwise things, but in the end I have here a projection — so this is a corner algebra — and in order to identify with the K-theory of this corner algebra I need the K-theories to be the same. So I'm not sure how this works for outer symmetries. But I think that you would need to extend this to the Fock space. Okay, so now, briefly, the topological classification at the boundary. So I introduce a boundary, as done often in the literature, by just considering an infinite boundary perpendicular to one translation direction. In this way I obtain the half-space lattice, which is just the right-hand part of the lattice here, and the boundary lattice is Z^{d-1}, so it still respects the translations in the d-1 directions parallel to the boundary. The primary difference from the bulk is that my shift operators perpendicular to the boundary are no longer unitaries, because, of course, if I shift something over this boundary, I map it to zero. So I obtain this condition here, where P0 is the projection onto this edge here, this boundary. But besides this fact, I can now define the observable algebra over this half-space lattice in the same way as in the bulk: again I have this covariance condition, and here I put in the half-space lattice, and the same for the boundary. There, again, I don't have a hat, because all shifts in those directions are still unitary.
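The defect relation for the perpendicular shift can be seen directly in matrices. This sketch is not from the talk; it uses a finite truncation of the half-line, so the half-space identities S*S = 1 and SS* = 1 − P0 hold exactly only up to an extra defect at the artificial far end, which is noted below.

```python
import numpy as np

N = 6
S = np.zeros((N, N))
for i in range(N - 1):
    S[i + 1, i] = 1.0                        # shift e_i -> e_{i+1}, away from wall

P0 = np.zeros((N, N)); P0[0, 0] = 1.0        # projection onto the boundary site
Pfar = np.zeros((N, N)); Pfar[N - 1, N - 1] = 1.0  # finite-size artifact

# On the true half-space: S*S = 1 and S S* = 1 - P0.
# In this finite truncation the first relation picks up the far-end defect:
assert np.allclose(S @ S.T, np.eye(N) - P0)
assert np.allclose(S.T @ S, np.eye(N) - Pfar)
print("half-space shift relations verified")
```

As N grows, the far-end defect moves off to infinity and only the boundary projection P0 survives — this failure of unitarity is exactly what generates the boundary ideal below.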
But basically the definitions are the same, and I can show that the half-space algebra is now isomorphic to the crossed-product C*-algebra with the semigroup Z^{d-1} × N, and this can be considered as the closure of all non-commutative polynomials of this form, where this prefactor is now in the boundary algebra; all shifts parallel to the boundary are contained in there. All right, so now, what about the QPVs? First of all, I want to explain to you the canonical morphism from the half-space to the bulk; this is important in order to distinguish, in the end, between boundary observables and bulk observables in the half-space. And this is done by a bulk projection from the half-space, which just maps the half-space translation operators onto the bulk translation operators, and in this way I obtain a set of boundary observables that are localized at the boundary; these are defined as all operators that are in the kernel of my bulk projection. So all observables that vanish if I go into the bulk have to be at the boundary. This is the ideal that is generated by the boundary projection, and it can be shown to be isomorphic to this C*-algebra. Okay, and as I said, this is the physical observation, and I can put this information into a short exact sequence: these are the boundary observables, which can be identified with some subset of the half-space, and on the right-hand side I have the bulk, and all together these form the half-space. I don't think you defined what this A_∂ actually is. A hat? No, A_∂. I mean, I know that you say that it's isomorphic to this kind of thing — this one here. Okay, so this is the d-dimensional version of the bulk algebra I just showed you. No. Yes — here it's important that the symmetries are on-site. Yes. Yes, indeed. So the symmetries are on-site, but the J-hat is not on-site, so the J-hat cannot be unitary in general anymore. So of course I have gapless boundary states; there is no gap anymore.
So all I can demand is that when I project into the bulk, I have a gapped operator. That's my definition of a QPV with boundary: when I project into the bulk, I recover a QPV. And then it follows, by general properties of my bulk projection rho, that J-hat squares to minus one up to boundary states, and this is the operator encoding of gapless boundary states. And the star-morphism phi, since it acts locally, is still a star-morphism, now into the half-space algebra; but I cannot demand anymore that it exactly anticommutes with J-hat, because this does not have to be satisfied in general from an algebraic point of view. And there is a canonical association between bulk QPVs and half-space QPVs, given by a section of this short exact sequence. Now I would like to establish some kind of topological classification — some kind of homotopy equivalence between QPVs with boundary. But I have in mind that I want to classify the properties at the boundary. If I just considered a homotopy of operators in A-hat, this would still contain some information about the bulk, because the half-space contains both the boundary and the bulk. So I have to somehow loosen the conditions on my homotopies, and this is possible by going over to the so-called multiplier algebra over my localized boundary states. This can be defined as all operators over the half-space — here I still have the disorder in this picture — that preserve the boundary states. This means that such a T times some operator in the boundary states is still in the boundary states. In particular, these are still covariant with respect to the translations parallel to the boundary — in other words, I preserve the covariance conditions parallel to the boundary — and I no longer demand that the operators in this algebra are covariant with respect to the perpendicular translations.
So in my opinion this is a reasonable assumption, because a boundary destroys translation invariance by construction. So it doesn't make any sense to assume this covariance if I'm interested in properties at the boundary. So that's why we dropped this condition. And furthermore, there's an injective star morphism from my original algebra into this larger algebra. And yeah, so this now generalizes the QPV with boundary to any set of tuples where F now satisfies the same conditions as J; the only difference is that it's now an element of the multiplier algebra. And my phi now also maps into the multiplier algebra, but I still have the same conditions as before. And I will call such a tuple topologically trivial — oh, there's a typo here, these should be F's, sorry for that — I call such a tuple trivial if all of these quantities are equal to zero. So I still got exact symmetries, my J-hat is still skew-Hermitian, but crucially my J-hat squares to minus one. And as we learned before, if J-hat squares to minus one, this means that there are no gapless boundary states. So from a physical point of view, this is a natural notion of a topologically trivial phase. And by definition this set defines a K group by modding out these topologically trivial phases and considering only homotopy equivalence classes of these tuples. So in this way I have a second picture of topological phases, which is again a K group, but now from a completely different point of view. And to summarize: the boundary topological phase of any disordered topological insulator or superconductor is defined as the equivalence class that is generated in this set of topological phases at the boundary. And some algebraic properties that follow from these constructions are, for example, that I always can demand that my symmetries are again preserved exactly, so I can demand that my symmetries are still anti-commuting with J-hat, and I also can demand that my J-hat is skew-Hermitian.
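In symbols, the triviality criterion just stated reads (notation assumed, following the discussion above):

```latex
\hat{J}^{\,*} = -\hat{J}, \qquad \hat{J}^{\,2} = -\mathbf{1}
\quad\Longrightarrow\quad \text{no gapless boundary states (topologically trivial phase)}.
```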
And so the only fact that prevents a topological phase from being trivial is that in general it does not square to minus one — and if it does, then it's trivial. And as I said before, the section of my bulk-boundary short exact sequence associates to any given bulk QPV a canonical QPV with boundary. And to summarize from a physical point of view: two boundary topological phases are equal if and only if there's a path in this multiplier algebra such that these two are equal up to trivial phases. And furthermore, a boundary topological phase is trivial if and only if there are no stable — stable in the sense of homotopy classes — gapless boundary states. So yes, that's another observation; here the reference is already there, it's the one where there are no gapless boundary states. Why? I don't quite get the picture. Yes. But these are two private, two independent half spaces. So. So there is the folding trick where you fold it, and then you view it as an interface in the bulk. Okay. Okay. How much time? Yeah, I can either make it short — I mean, I already have explained to some extent the topological phases, and now the bulk-boundary correspondence is quite easy to construct, because from a mathematical point of view the bulk-boundary correspondence is established on the K-theory. I have already introduced the bulk-boundary short exact sequence; here as a reminder, this is the set of observables that are localized at the boundary, the half space, and the bulk, and this always induces a long exact sequence on the K-theory, which is of this form. But the only relevant morphism for us is the one that is here, diagonal. It's the one that maps bulk topological phases onto boundary topological phases, and I can interpret it like this because I have explicitly identified my topological phases with KR-theory groups, and thus I immediately get a bulk-boundary correspondence by this morphism, which is constructed from abstract mathematical concepts.
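The long exact sequence in (real) K-theory induced by the bulk-boundary short exact sequence can be sketched as follows; the relevant "diagonal" morphism is the connecting map ∂ from bulk classes to boundary classes (the indexing conventions here are assumed for illustration):

```latex
\cdots \longrightarrow K_{n}(\hat{\mathcal{A}})
  \longrightarrow K_{n}(\mathcal{A})
  \xrightarrow{\;\partial\;} K_{n-1}(\mathcal{A}_{\partial})
  \longrightarrow K_{n-1}(\hat{\mathcal{A}}) \longrightarrow \cdots
```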
And now the main result of our work is that this bulk-boundary morphism maps the topological phase in the bulk, which is measured with respect to some J-ref, onto the boundary topological phase, and again here occurs the reference QPV. This might surprise you a bit, but if this J-ref is for example an atomic-limit topological insulator or superconductor, then this phase is trivial, so then indeed I only get this class here — what one actually should expect — but of course if I consider here any possible difference of two QPVs, then the difference has to be observed at the boundary too. And these J-hats are the canonical lifts from my bulk to my half space. Okay, so finally some physical properties. The image of the bulk-boundary correspondence is equal to the set of all boundary topological phases if my Omega-naught is contractible, but this doesn't have to be true in more general settings of disorder. Or stated differently: all boundary topological phases correspond to some bulk topological phases, so I cannot just change the boundary without changing the bulk. That's one physical formulation of bulk-boundary correspondence. And another point: given a fixed bulk class, the boundary class depends on the direction of the boundary. Probably most of you will know this, but for me it was a bit surprising that you measure different topological phases depending on which face you cut your system at. It's not all the same. It's only the same for the strong topological invariant. This is the same on all boundaries up to a sign, which depends on some orientation of your space, but this is seen on all faces, and this also implies that if I got a non-trivial strong topological invariant in the bulk, then all boundaries have gapless localized states which cannot be destroyed by local perturbations. Well, the bulk is gapped, so I cannot move any gapless state with a small amount of energy into the bulk. So the mathematical statement is: I always have states in the gap. Right?
But they don't necessarily localize there. Localized. So that's what you mean, right? They're very much stretched along the boundary. They're very non-local. Localization is still there. Yeah, okay. They're transversely localized. Yeah, the mathematical assumption was: in the bulk I have such a gapped spectrum, and the statement is, if I introduce a boundary, then I get some gapless spectrum, and since I know it cannot be in the bulk, it has to be at the boundary. That's how I think about it. Yeah. Yeah? To make a statement — this is what we call the spectral statement, and there is a dynamical statement on top of this, which is that these gapless boundary states do not Anderson-localize. What we found is, for sure you need the pairing to go all the way into index theory to resolve the second layer of the problem. Okay. Which is the dynamical statement. Okay. So that's why the Fredholm pairing is quite important. Okay, yeah. But yeah. Are you finished with the work yet? Yeah. So let's give it a hand. Yeah. Thank you.
Guided by the many-particle quantum theory of interacting systems, we develop a uniform classification scheme for topological phases of disordered gapped free fermions, encompassing all symmetry classes of the Tenfold Way. We apply this scheme to give a mathematically rigorous proof of bulk-boundary correspondence. To that end, we construct real C^\ast-algebras harbouring the bulk and boundary data of disordered free-fermion ground states. These we connect by a natural bulk-to-boundary short exact sequence, realising the bulk system as a quotient of the half-space theory modulo boundary contributions. To every ground state, we attach two classes in different pictures of real operator K-theory (or KR-theory): a bulk class, using Van Daele's picture, along with a boundary class in Kasparov's Fredholm picture. We then show that the connecting map for the bulk-to-boundary sequence maps these KR-theory classes to each other.
10.5446/56184 (DOI)
So, as Valentina just mentioned, I will be talking about a project I conducted while I was doing my EPIET fellowship, right before MOOD started, actually. All right, so a very brief reminder about tick-borne encephalitis (TBE). It's a virus of the flavivirus family. The main mode of transmission is vector-borne, and there is also transmission through non-pasteurized milk products such as cheese, but it's way less common, and especially way less common in a country like Finland, where they do not consume unpasteurized cheese. So the main vectors are ticks, and they can transmit the disease at each of their developmental stages through blood meals. Hosts can be birds or mammals, including cervids, rodents, or animals that can be hunted in Finland. So, the TBE infection course: TBE infection is asymptomatic in most cases, and that's something we need to remember once in a while when you look at surveillance data. And the course of the infection depends on the viral subtype. We have two that are present in Finland: the European virus subtype, with a biphasic infection starting with a flu-like syndrome, meningoencephalitis in 20 to 30% of those that will have those flu-like symptoms, and residual sequelae in half of the ones that will have meningoencephalitis — case fatality rate below 2%. The Siberian virus subtype has a clinical course that's quite different, and this is why I was mentioning it when we had the TBE case-study meeting a few days ago: you will have direct neurological signs like focal encephalitis or meningitis, and 80% complete recovery, with a case fatality rate that's quite close to the one of the European virus subtype. So once again, TBE infection is asymptomatic in most cases. In Finland, our incidence was around one case per 100,000 at the time I conducted that project, with 60 to 70 cases per year between 1995 and 2013. When I say 60 to 70 cases per year, I'm talking about laboratory-confirmed cases.
It's a focal disease, and several emergent TBE foci were identified in the past years. We have laboratory-based enhanced surveillance established since 1995. TBE prevention is quite straightforward: you avoid tick bites, and if you're bitten by a tick, you remove it quite soon — hence the importance of tick checks — and we recommend wearing protective clothing during outdoor activities. There is also the possibility to receive TBE vaccination, with a booster every three to five years, and this is where we have different recommendations per region in Finland, based on some cost-effectiveness calculation and a five-year moving-average incidence calculation. This is an example of how TBE immunization was back in 2017, when I started working on that program. Depending on the area and depending on your activities, permanent and summer residents get the TBE vaccine recommended for them. So for example, if you live in an area with one to five cases and you have no outdoor activity, you would not be recommended to be vaccinated. If you're a forest worker, you would get the vaccine recommended for you. So the vaccine program requires updates almost yearly, and this is some of the work that Henna, my sidekick — even though she hates that term — works on quite often: to which areas must the vaccine program be expanded or not. It is also important for us within THL to work on some communication regarding the fact that there is a misunderstanding between TBE and ticks. There is quite a widespread belief amongst people in Finland that the TBE vaccine will protect you against tick bites, or will also protect you against Borrelia. It only protects you against TBE, and that's why we have to develop that kind of infographics. We also have a dashboard, because dashboards are trendy these days. This is hosted by THL, where you have the five-year rolling incidence.
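The five-year rolling incidence used for the dashboard and the vaccination recommendations can be sketched as a simple moving average; the function name, numbers and constant-population assumption here are illustrative only:

```python
def rolling_incidence(cases_per_year, population, window=5):
    """Moving-average incidence per 100,000 over `window` years.
    cases_per_year: yearly case counts, oldest first;
    population: area population (assumed constant for this sketch)."""
    out = []
    for i in range(window - 1, len(cases_per_year)):
        mean_cases = sum(cases_per_year[i - window + 1 : i + 1]) / window
        out.append(mean_cases * 100_000 / population)
    return out

# Six years of counts give two 5-year windows -> roughly [2.5, 3.8] per 100,000
print(rolling_incidence([2, 3, 4, 6, 10, 15], 200_000))
```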
We have the areas where you have the recommendation, and the number of cases per area, and stuff like that. But once again, keep in mind, this is only based on lab-confirmed cases. The aim of that project was to describe TBE epidemiology, assess the national and regional TBE incidence increase over time, and investigate the effects of climatic and ecologic factors on the emerging trends. We conducted what can be considered — and what should be considered — as very basic time-series analysis; this was not rocket-science stats. We used the data that was available. For TBE cases, we had registers, so lab-based national infectious disease register data, where we had date of sampling and date of onset. We imputed the date of exposure as the date of onset minus two weeks, or the date of sampling minus three weeks if we didn't have the date of onset. The place of exposure was more or less detailed: it was collected through case interviews, and we sometimes could pinpoint it to municipalities, but most of the time we would only have some wider regions or wider areas. Now, we wanted to look into animal density. The only data we could get was hunting data through Luke, which is the game management and forestry agency for Finland. We could get the numbers of animals that were hunted in each game management area, and we could get the following animals: moose, several types of deer, hares and foxes. The time unit we had was only the year, and the geographical units were game management areas, which, as you can see from this map, are very large areas, which will lead to some dilution of the effects. Another important point is that we couldn't get data for the Åland archipelago, which has some autonomy within Finland, but is also a TBE hotspot that we would have been very interested in including in that time-series analysis.
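The exposure-date imputation rule described above (onset minus two weeks, falling back to sampling minus three weeks) is simple enough to sketch in code; the function name and data layout are illustrative, not from the original analysis:

```python
from datetime import date, timedelta
from typing import Optional

def impute_exposure(onset: Optional[date], sampling: Optional[date]) -> Optional[date]:
    """Impute the date of exposure as in the talk: onset minus two weeks
    if the onset date is known, otherwise sampling minus three weeks."""
    if onset is not None:
        return onset - timedelta(weeks=2)
    if sampling is not None:
        return sampling - timedelta(weeks=3)
    return None  # both dates missing -> case excluded from the analysis

# Example: onset on 15 July -> imputed exposure on 1 July
print(impute_exposure(date(2017, 7, 15), None))  # 2017-07-01
```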
Regarding weather data, things got even more complicated; we had to use open data that was made available to us by the National Oceanic and Atmospheric Administration. We had daily records from several weather stations per game management area. I had played around with QGIS, and this is where the weather stations were: we had several of them scattered around different game management areas, and unfortunately several interesting measures could not be used — for example rainfall or snow cover — because they were not collected in a systematic way in all of these weather stations. We only played around with temperatures: minimum, mean and maximum temperature recorded, average daily variation, proportion of days in a month with a mean temperature above 5 degrees Celsius, and mean temperature surplus. In order to be able to input that into our model, knowing that game management areas would be our geographical unit, we had to calculate an average temperature value per station per month — which was our time unit — and then per game management area. So averages of averages of averages. Now, stats-wise, we calculated TBE incidence trends, and to run our model we used a mixed-effects negative binomial regression, with the number of cases as the dependent variable, time unit (year) as an explanatory variable, and a random effect on game management areas. We also modeled ecological drivers with a mixed-effects negative binomial regression; there the time unit was year-month, the explanatory variables were temperature data and hunting data, and once again we had that random effect on the game management areas. So first — oh, well, this is something I had made for an open science event; I think I don't need to go through that, this was a more easy explanation.
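The "averages of averages" aggregation mentioned above (daily station records, then monthly means per station, then a mean over the stations of each area) can be sketched as follows; the field names and data layout are assumptions for illustration:

```python
from collections import defaultdict
from statistics import mean

def monthly_area_means(records):
    """records: iterable of (area, station, (year, month), temperature)
    daily observations. Returns {(area, (year, month)): mean over stations
    of the per-station monthly means} -- an average of averages."""
    per_station = defaultdict(list)
    for area, station, ym, temp in records:
        per_station[(area, station, ym)].append(temp)
    per_area = defaultdict(list)
    for (area, station, ym), temps in per_station.items():
        per_area[(area, ym)].append(mean(temps))      # monthly mean per station
    return {key: mean(ms) for key, ms in per_area.items()}  # mean over stations

daily = [
    ("GMA1", "S1", (2016, 6), 10.0), ("GMA1", "S1", (2016, 6), 12.0),
    ("GMA1", "S2", (2016, 6), 14.0),
]
print(monthly_area_means(daily))  # {('GMA1', (2016, 6)): 12.5}
```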
Anyway, regarding TBE epidemiology: we had 488 cases reported between 2007 and 2017, with a median yearly number of cases of 43 and a median age of 53 years. When we looked further into the data, we had 28 cases exposed abroad — so we excluded them — and then 65 cases with missing place of exposure and/or sampling date, which is quite sad, because then we only had 395 cases we could include in the epidemiological and time-series analysis. Now, TBE increase over time in Finland: it's quite straightforward that there was quite a strong increase. We could say, based on the model, that it increased by approximately 15% per year, with strong regional variations — and don't throw rocks at me, I did use Stata for that project. Now, regarding the weather data modeling, we conducted a single-variable analysis for weather parameters with a one-month lag, adjusted for time, with a 12-month periodicity and a random effect on the game management area. We didn't find anything that was very consistent, and we then chose to keep the minimum temperature as the weather parameter we would keep as an adjustment variable in the other models. When we looked at the hunting data, we had the single-variable analysis of the game data per 100 units — otherwise it would have made no sense to show it per number of animals. We adjusted for the average minimum temperature with a one-month lag, the 12-month periodicity once again, and that random effect on the game management area. Nothing consistent came out of the single-variable analysis, and we then used stepwise forward modeling, and we found out in the end that the number of white-tailed deer that had been hunted was significantly and positively associated with the TBE incidence increase, while it was going the other way around for roe deer. Now just a little something, a little particularity about the white-tailed deer: that's the one you see in Bambi.
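As a numerical aside on the "approximately 15% per year" figure above: in a log-link count model such as the negative binomial regression used here, a yearly percentage increase corresponds to exp(β) − 1 for the coefficient β on the year term. A minimal sketch — the coefficient value below is made up for illustration, not taken from the paper:

```python
import math

def yearly_percent_change(beta: float) -> float:
    """Convert a log-link regression coefficient on a yearly time unit
    into a percentage change in expected incidence per year."""
    return (math.exp(beta) - 1.0) * 100.0

beta_year = 0.14  # hypothetical coefficient, for illustration only
print(round(yearly_percent_change(beta_year), 1))  # 15.0
```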
It comes actually from Virginia, and it's not an autochthonous species in Europe. It was introduced in Europe in the 19th century in the Czech Republic, where TBE is also endemic, as well as in Finland in the 1930s. And quite funnily — I forgot to put the reference of the newspaper article about that — the number of white-tailed deer in Finland grew from just five individuals that were brought in in the 1930s to over a hundred thousand over 80 years. The question that still remains for us is whether it has a stronger contribution to TBE spread in Finland — something that we unfortunately could not get a fine and detailed grasp of, because of the strong dilution of all those parameters. To conclude: there's a clear nationwide TBE increase with major regional variations, and there's an influence of certain cervid species on TBE incidence. The limitations are that we had to use a proxy for animal density, that we didn't have data for the Åland archipelago, the large geographical units — a limitation I've already mentioned — and we had short time series. And as you can see, I also went to do a little bit of tick hunting in Turku during that project, but that was just for fun, not as serious as what Karin's team has been doing. And because it's interesting to have a look again at what's happened since then: we had a record year in 2020, and again in 2021 with 151 lab-confirmed cases. Our enhanced surveillance had no capacity to collect proper information timely until we rang the alarm, and we're still puzzled by why we would see such an increase. Is it because Finns traveled less? Is it because they've spent more time outdoors because of COVID-19 restrictions and the recommendation to see people outdoors instead of seeing them at your place inside? Is it because we had milder weather, which could have fueled TBE transmission and, on the other hand, fueled the fact that people went outdoors more? Those are questions that are open for us.
If you want to go further, the paper was published in Emerging Infectious Diseases almost a year and a half ago, and there's also the MOOD TBE case study. And as you can see, I really like QR codes these days — that's a trendy thing too. Many thanks for your attention. I hope you have some questions, and I hope that in the near future, when somebody else tries to do a similar project, the MOOD data packages on weather or animals will be easily accessible and easy to manage, so that they don't have to go through the QGIS playing around or banging their head against walls due to the way too large geographical units. Thank you guys. We have the first question from Willy: you used the weather station data — could satellite data have worked too? You want the honest answer, Willy? I don't know. It did not cross my mind to use satellite data back then. I was young and stupid. Well, I'm old and stupid, so it doesn't occur to me to use weather station data, because, as you say, it's very — especially the Finnish stuff, which I've had cause to play with, and the Swedish stuff I've had cause to play with before — it's very complex. So just to try and allay your fears: the MOOD project will be providing lots of satellite data, not the whole weather station data. Or if it is weather station data, it will be heavily massaged, and we'll probably make it available. That's awesome. I already knew but… Insha'Allah. Insha'Allah. We have a daily… Sorry, there's one other thing, Tim, before I forget. We're just in the process of producing a new dataset that hasn't been done before, on day length, photoperiod. And people have been beginning to tell me that that sort of thing is useful as a way of showing the beginning of the season, in conjunction with temperature. And I'm wondering whether you would agree, and whether it's worth putting data on that in too. You need to ask real researchers about that.
I mean, I would be interested in the daylight stuff as a proxy for possibly more time spent outdoors. But that would be… if I had had it, I would have been curious to add it to my model. Okay. For fun. Yeah, sure. I will. I mean, you'll have it next week if you want. Thanks very much. More questions in the chat. One from Ana Paula: host density data are very rare — how should we cope with this limitation? And one from Wim that I can ask later. How should we cope with that limitation? I don't know, because when we decided to use the game management area data, I found some references saying that it was quite a decent proxy. The problem is the size of the game management areas. So I don't really know how we could cope with that. And there is now the question from Wim: what about the impact of vaccination on your data? So this is… In fact, Tim, I have a bit of a question on your tick-borne encephalitis data. Is this all tick-borne encephalitis, including the severe and the more mild forms? Or is it only the severe forms that are detected by your system? And then on the vaccination as such: I suppose that in some areas you will have quite some vaccination coverage, which will of course influence your data. How was this taken into account in your model? So, for the TBE cases, it's all cases that have been lab-confirmed. So it could be the flu-like, viremic patients, and it could also be the way more severe ones. But it's based on a passive system, people presenting themselves at the health service? Yes. And regarding the TBE vaccine coverage, this is another problem: we do not have coverage data per area, because the recommendation is not only based on permanent residents but also on summer residents. So we do not have the ability to say whether in this area the summer residents have this good or this bad coverage.
And you have to keep in mind that everybody has a cabin in the forest. Yeah, exactly. So we could not use it, but on the other hand, the recommendation has not gone from black to white or white to black; the recommendation was quite similar in most of the areas over time. Okay. Yeah. Thanks. Timothee, I had a question about the land cover. Have you tried also subsetting these polygons you have, so you only use the land cover classes that are relevant? Well, that's a bit out of my league, stat-analysis-wise. Okay. Because I imagine this disease doesn't occur in some land cover classes, like urban areas, or I don't know. So the thing is — and Henna can add some details about that — we now also have TBE in more urban areas, like parks. Okay. And there is a discussion, I think, going on in the chat, and there is one question about TBE cases, and there is a comment from Mariana: thank you, Tim, excellent presentation; the increase of cases is probably very multifactorial. TBE has been discussed more in Finland during recent years, which also leads more doctors to think about TBE and order specific tests. It is possible that increased awareness leads to more cases being caught. Are there any more questions that we can answer? Yeah, I just have a question on your last slide, on the TBE increase. If I'm not wrong, it was not only seen in Finland, I think. If I remember well, in 2020 there were other countries where you also had this increase, isn't it? Austria for sure reported that, and other countries, but I would not be able to say which ones. Yeah, which might indicate that it's something bigger — a bigger picture. But yeah, COVID is still not excluded from that explanation. Yeah. I have a question.
In Finland, they are very advanced with technology, so do you think that a citizen science project to stimulate people to provide information through apps — for example regarding infections like this one or other diseases, providing more detailed information on the site, on the situation when they got infected — would work? Or is it too much information for them, so people would not participate? So, if Henna is still here, maybe she could say a few words about the interviews of TBE cases that she is leading. Yeah, of course. Sorry, I missed the question a little bit, but I can tell some basic stuff about the interviews. Basically, we get all the patients from the register, and we get their patient information, all the information that they have. Sorry, now I'm struggling to find the words. So they go to doctors, and doctors write these patient reports. So we get all this information, and we find some relevant information from there, but we also call the patients and ask them about the symptoms, where they started, where they went during the specific time before the symptoms started, what they think is the most likely place of exposure, and whether they found any ticks. And if they are aware that they had a tick — which is not the case with all the patients, of course — then we ask: could you be more specific, do you have an address, can you pinpoint a location? And then what we do is put a point on the map, which marks the most likely place of exposure. Talking about last year's cases, we got this information for approximately 90% of the cases. So I think that's very high-quality information that we have, and it supports us very well when we extend the vaccination program to further areas. I hope this answered your question. I would be happy to give more details if needed. So I would need to check, but I think that we've seen similar slow but steady increases in other European areas.
In Finland, we've also seen that TBE was expanding towards the north of the country: where it was more of a southern disease initially, it has expanded more towards the north. It is possible that it might be quite multifactorial: milder weather makes the TBE season longer, and also means more time spent outdoors. So I think it's perfectly plausible to say that global warming has an effect on the spread and circulation of TBE. And there's been some work conducted by geographers on the suitability of areas depending on weather forecasts for the next 10 or 20 years — whether some areas would be more suitable for ticks and TBE circulation, depending on how things evolve. So yes, one of the questions I've been asking myself about TBE in Finland lately is whether it is still worth it to have that enhanced surveillance, because having someone call patients and do interviews is something we can do when the number of cases is still limited. But when we have 150 cases over a year, it's a huge amount of work, and we cannot have somebody doing this full-time for a huge amount of time. We need to think about a different surveillance system, and we need to think about whether it is still valuable to collect the detailed information on place of exposure. Until now, it appears that it's still useful to guide the vaccine recommendation, because of the very specific areas. But at one point we might need to reconsider whether we should not have more blanket vaccine recommendations. It's not now, it's not today, but this is something that might have to come up in the conversation within the coming five or ten years.
Tick-borne encephalitis (TBE) has become a growing public health challenge in Europe and other parts of the world. The number of human cases of TBE in all endemic regions of Europe has increased by almost 400% in the last 30 years; with spreading risk areas and new foci across Europe and worldwide, TBE has been included as one of the biggest health threats arising from environmental change. Novel assessment and monitoring strategies are therefore needed to face current and future outbreaks. For our first MOOD science webinar of 2022, we invited Timothee Dub, Research Manager in the department of Health security of the Finnish Institute for Health and Welfare (THL) to discuss the results of his paper published in Emerging Infectious Disease ‘Game Animal Density, Climate, and Tick-Borne Encephalitis in Finland, 2007–2017’.
10.5446/56105 (DOI)
What I hope to share with my previous presenters is some of the ability to take obscure ideas and build them into existence. I think that is something we all share, but on various levels. So I'm very happy to have been in the network from the very beginning. And from the very beginning, I followed an interest in using felt and the techniques of producing felt in the context of architecture and design of some sort. I must say that's only, let's say, one of the interests we follow at the institute, but I'm happy that it stayed with us over the last four or five years, always on a kind of small-scale, niche level. So that is not — now I'm giving the full presentation already, sorry. So, needle felting is a technique where you can use almost any fibrous material: you repetitively move a needle with little hooks through that material, and that lets the fibers intertwine and form a stronger pack. There are different ways of felting — there is also wet felting, which only works with certain materials — but we use needle felting. And what we started to envision in the beginning was that needle felting could be used as another way of 3D printing. So rather than having a material that gets extruded and bonds in various ways — by sintering, chemically, and so on — we use the process of needle felting. And initially we just mimicked what's been done by hobbyists anyway: we produced forms by needle felting manually. So we can actually make any object, and they will be fairly stable, using nothing but fibers. But we know fibers don't really do a lot of work on their own — they can do just about enough work to support themselves. How to bring that into architecture and design, and make them more structural — use them as composites? So that was basically the thesis we started out with: how do we use the old, manual, hobbyist technique of felting fibers to turn them into composites?
Obviously, there was some work already done before, which we built on: the Disney Research group proved that the felting technique could be used as a 3D printing technique. They did it slightly differently than we do — they used a thread material, which makes it much easier to use the notion of slicing, a linear process, and so on. What we did then, as a first proof of using these felted structures as composites, was to try all sorts of matrix materials — from glue and hairspray to various cement-bound materials and so on — to infiltrate the felt, and then we got a composite. Another path we took: if we really want to have a fully integrated and multifunctional approach, how do we integrate other mechanical elements into the process? One way we did this: we can take almost any shape coming from a different process — as long as it ends in some sort of entangled, mesh-like structure, we can just felt around that and integrate it. There was a student project where they used 3D-printed traditional elements like stairs, windows and so on, and you see they have this kind of mesh-like extension to them, and by just putting the felt around that, they get fully integrated. So for the ISS in 2018, we had a first attempt at developing a robotic process for producing some shapes with it, and it turned out to be really cumbersome once you leave the known process of linear extrusion and leave the process that Disney laid out, with a linear layout of a thread and so on. But that was what we proposed: we can produce any sort of shape in felt, and we can then infiltrate that shape with a matrix — in this case with UHPC — and we can use either elements such as wall sections or elements such as connectors. So there are possibilities, although it's a massive leap to go to such a large scale with this kind of technique, which is usually used by hobbyists.
So we developed a series of end effectors. We actually use a hobbyist tool which does high-frequency needle felting, added one more axis that oscillates, and applied that along the path. You lay out the material as a fleece, and the final density of the piece is roughly linear in the number of passes you make over the material. So you can densify the material just by going over it more often, so that it compacts. There is still a lot of research to be done on how this rather nonlinear process builds up. But what we stated here as a thesis is that in one piece, a wall section in this case, you could have parts that are structurally necessary, that are self-supporting and get infiltrated with some sort of cement matrix, but you could also include your installations in there, and you could include your finishing layer in there. So it is basically the same composite in different densities and different mixtures that you produce with that process. With this series of tests, the most difficult part was getting the right balance between which parts are highly densified fibers and which parts are fairly loose fibers, so that enough matrix material can get into the fibers. We used different materials and different material mixes for that, and it is always a balance between having a relatively dense core of felt and a rather loose outer layer, where you get a good ratio of fibers and matrix in the mix. Going from these abstract examples, we had the opportunity to work on one competition where we asked: what kind of project actually demonstrates the full potential of this functionally graded material, going from hard, structural, self-supporting parts to something that is even super soft, like this seating surface?
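The "density is roughly linear in the number of passes" observation above can be turned into a tiny path-planning sketch. This is a hypothetical model: the base fleece density, the density gained per pass, and the zone targets are invented illustrative numbers, not measurements from the project.

```python
# Sketch of the "density scales with the number of needle passes" idea
# from the talk. Base density, density gained per pass and the pass limit
# are illustrative assumptions, not measured values from the project.

def passes_for_density(target, base=0.05, gain_per_pass=0.02, max_passes=40):
    """Passes needed to reach a target fibre density, assuming density
    grows roughly linearly with each pass over the material."""
    if target <= base:
        return 0
    return min(round((target - base) / gain_per_pass), max_passes)

# A graded element: dense structural core, looser infiltration and finish zones.
density_map = {"core": 0.45, "infiltration_zone": 0.15, "finish": 0.07}
plan = {zone: passes_for_density(d) for zone, d in density_map.items()}
print(plan)  # {'core': 20, 'infiltration_zone': 5, 'finish': 1}
```

With a gain-per-pass value calibrated from trial prints, the same mapping could decide how often the effector revisits each region of a graded piece.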
That was only the visual, the idea, and then we later had the opportunity to do exactly that: we built it into existence. We said, let's just do it, and we took not the Panton chair but an even cheaper chair, an IKEA chair, mainly because it also includes a mechanical fitting: the seat shell is bolted with three bolts to a little frame. So that's what we did. We simply laid out a really fluffy piece of felt in the beginning, densified it more and more, and put in these ready-made felt spheres. They are not glued in there; it is the same process, you just use the needle again and connect them. Then we had these 3D-printed parts where you see the connector; that was embedded on the underside, and then the entire thing was impregnated with concrete. So you get this really thin layer that has some reinforcement in it. And yes, you can put a child on it, see? But you know where the cracks are going to be if one of us sits there. So that is basically the fast forward of what we did over the last four years with this niche interest in felt concrete. And it is still interesting enough for us to stay with it, although I have just given a few hints of where the really great challenges with this technique are. What is really great for me to see is that it needs this balance of doing engineering-like research at a small scale, but then projecting that into some sort of almost make-believe projects, where you convince yourself that it actually has some mileage and is not just some obscure interest. That's my presentation so far. I'm happy to take any questions here, but also to go for the coffee break, and then we're back in time here. Thank you. Thank you, Michael.
I really love the experimental approach, and then to see what can emerge from these experiments that might sound a little bit crazy in the beginning. When I was working on my own PhD, I was always taking Pier Luigi Nervi as a reference for my work. Funnily enough, his first patent was called ferro-cementitious felt, where he really took felt as the methodological reference for bringing stiffness, or strength, through form, which Philip mentioned earlier. He was saying that if you have these many layers of intertwined reinforcement that he used for ferrocement, and you infiltrate them with concrete, you actually do what hat makers do: they have a very soft material, and just by shaping it, it becomes very strong. I really like how this is a very literal translation of that principle here. Also at Etude Van Schoep, we have an artist who is working with felt and with wool. She is using it in combination with clay structures. That is also very promising, because she is looking into all the building-physics properties and the performance of this composite, which might also be interesting to you, to have another, more sustainable matrix. Yes. Thank you.
The paper explores the possibilities of a composite of needle-felt fibres and a cement-based matrix for designing and fabricating architectural elements. It summarizes a longer-standing design-research undertaking in needle-felting as a novel method for creating materially graded objects without the use of formwork. The work is conducted on a design prototype of a chair to emphasise the integrative nature of this material system. This approach allows the study of all aspects, from the fabrication process and the function of details to structural behaviour and user experience. Needle felting has a long tradition in arts and craft for manually forming intricate objects such as figurines or applying ornaments to textile objects. The main feature of felt is that it can bring fibrous material into a stable shape. This process has been digitized by Disney Research, proposing needle felting as a way of 3D-printing fibrous material [1]. Becker et al. also proposed a process for robotically 3D-printing felt shapes [2], which forms the basis for a material composite of felt fibres and a cement-based matrix. Their paper argues for functionally graded architectural elements in which the composite of felt and cement can provide tensile reinforcement, compressive strength, insulation, and different haptic surfaces. Based on these methods, a replica of the IKEA® Leifarne chair is realized here, showcasing a range of felt-concrete composite applications integrated in a single object.
10.5446/56107 (DOI)
Hello everyone, it's very nice to be here. Last time we all met in Aachen, and we had the possibility to show you a couple of our textile reinforcements and textile production processes. That's why I'm happy you are familiar with the topic. To introduce myself: I am from the Institut für Textiltechnik of RWTH Aachen University. I will first talk about textile reinforcing structures and their advantages, and after that I will show you our approach to integrating textile reinforcement into the 3D concrete printing process. So why are we using these textile mesh structures? Because they are non-corrosive. In comparison to steel reinforcement, the corrosion problem is gone, and that's why we can realize very thin-walled concrete structures. It depends on the application, of course, but we can reduce the concrete amount by up to 80, sometimes 85 percent. Less concrete also means less cement, and that means CO2 emissions, so we can reduce those by up to 80 or 85 percent as well. We have other advantages too. When we use less concrete, we gain more usable space on the same footprint, and therefore, for example, more rental income. We have higher strength; it depends on the textile material, but with carbon fiber we have, for example, ten times the tensile strength. We expect a longer service life. We cannot prove it yet, because the material is still new to the construction industry and has been researched for only about 30 years, but we believe a service life of up to 100 years will be possible with alkali-resistant glass fibers or carbon fibers. On the other side, we have another revolutionary building material: 3D-printed concrete. And of course, we are interested in using our textile mesh structures in these 3D concrete printing processes. Why are we doing this?
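The "up to 80 to 85 percent less concrete" figure can be sanity-checked with simple thickness arithmetic: steel rebar needs corrosion cover on both faces, a textile mesh does not. The wall thicknesses below are typical illustrative assumptions, not values from the talk.

```python
# Back-of-envelope arithmetic behind the "up to 80-85 % less concrete"
# claim. The thicknesses are illustrative textbook-style assumptions,
# not figures from the presentation.

def concrete_per_m2(thickness_m):
    """Cubic metres of concrete per square metre of wall."""
    return thickness_m * 1.0

steel_rc = concrete_per_m2(0.15)  # e.g. 150 mm: cover + bars + core
trc = concrete_per_m2(0.03)       # e.g. 30 mm thin-walled TRC panel

saving = 1 - trc / steel_rc
print(f"concrete (and roughly cement/CO2) saving: {saving:.0%}")
```

Since cement content scales with concrete volume for a given mix, the same ratio is a first-order proxy for the CO2 saving mentioned in the talk.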
Like Igor, we also did some research on the state of the art. The research regarding 3DCP generally focuses on the manufacturing and digitization methods and on the integration of steel fibers or short fibers. And we have unknown material properties on the composite level. So there is no integration of two-dimensional or three-dimensional textile reinforcement on an industrial level, and no research on material characterization at the composite level. That's why we would like to close this gap. We also checked a couple of projects. From the process side, there is first printing and then reinforcing, or first reinforcing and then printing, or both together; Igor also showed us a couple of examples from Technische Universität Dresden. When we want to reinforce a wall structure, for example, we need the reinforcing material in the vertical direction, steel or textile reinforcement, it doesn't matter, because the reinforcing materials absorb the bending tensile stresses. So when I print in the horizontal direction, I always have a problem when my reinforcement mesh or cage runs in the vertical direction. We checked a couple of projects, and both of them were very interesting to us, but of course we looked for disadvantages and asked how we could overcome them. The approach of the Chinese company HuaShang Tengda is very good, because we need to embed the reinforcing structure fully in the concrete. But I am sure you have seen the video: from the pumping source to the printing head, to the nozzle, there is a long way, and these parts are not stable, so two workers walk along with the robot during production. We would like to change this. And there is a shadowing effect: when we print from one side of the reinforcing mesh, we cannot fully embed the reinforcement in concrete on the other side.
So we asked ourselves: what can we do, how can we change this, and how can we turn these points to our advantage? This is a demonstration of the printing process. On the other side of the reinforcement, we need a formwork to be able to close the reinforcement from that side. But when we have a formwork, 3D printing is actually not clever, because we would like to eliminate the formwork. That's why we are now working with a movable formwork, which travels with the printing head, but on the other side of the reinforcement. It is just a demonstration; we don't need a full robot on the other side. It could be a very small but stable device which moves together with the printing head, and we are looking for a sensor connection between them to be able to keep the wall thickness constant. And here is what we did first: some preliminary tests. We used a biaxial warp-knitted textile mesh made of alkali-resistant glass fibers and coated it with epoxy resin, so we had a very rigid mesh in the end. This is just a demonstration with acrylic glass, so that you can see how we close the mesh from the other side. We created some four-point bending test specimens and tested them. You also see a bottle in the video. It was fun, but it was from the wrong city; we didn't realize it at the time, and afterwards I had no chance to change the video, of course. We used two different concrete types. One of them is a printable concrete mixture, SikaGrout, from the company Sika; it is a newly developed mixture, and we received it from the company. The other is our reference mixture, a fine-grained concrete that we use all the time at the institute to compare our textile structures with each other. And after that, we tested these specimens.
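The four-point bending results are reported as stresses in megapascals. As a reminder of how an ultimate load converts to a flexural stress, here is a small sketch; third-point loading is assumed, and the specimen dimensions are illustrative stand-ins, not the ones used in these tests.

```python
# How an ultimate load from a four-point bending test converts into a
# flexural stress in MPa. Third-point loading is assumed, and the
# specimen dimensions are illustrative, not from the study.

def flexural_stress_mpa(load_n, span_mm, width_mm, height_mm):
    """Four-point bending with loads at the third points:
    M_max = (F/2) * (L/3), sigma = 6*M_max / (b*h^2) = F*L / (b*h^2)."""
    return load_n * span_mm / (width_mm * height_mm ** 2)

# e.g. a 1.15 kN ultimate load on a 450 mm span, 100 x 15 mm cross-section
print(flexural_stress_mpa(1150, 450, 100, 15))  # 23.0 MPa
```

With N and mm as inputs, the result comes out directly in N/mm^2, i.e. MPa, which is why no extra unit conversion appears in the formula.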
And here you can see the preliminary results. This one is our cast reference TRC; here we have an ultimate stress of approximately 23 megapascals. We didn't want to change the process from casting to printing and the material at the same time, so we also cast the SikaGrout mixture to be able to compare both at the beginning. And it was actually very similar to the reference TRC structure, if we don't count this one specimen. Then we used the process I presented to you and printed the SikaGrout. As you can see, the results are better than the cast ones; we sometimes reach up to 30 megapascals. So we are continuing this research and now use this mixture. What we want to do, and would like to do together with you, is adjust the flow properties of the concrete, because we are a textile technology institute and don't have much expertise in concrete mixtures and printing. We would like to investigate the adhesion between the concrete layers first, and also between the textile reinforcement and the concrete matrix. And we are now also conducting research on the sensor connection between the movable formwork and the concrete printer. So if you are interested in working with us, here you can see my contact details. I would be happy to answer your questions now.
One major challenge preventing widespread introduction of digital concrete production is the integration of reinforcing materials. Textile grid structures offer a possible solution for this challenge. Textile reinforced concrete (TRC) has been researched for approximately 20 years and is currently being commercialized, initially in pre-cast elements for facades and bridges. TRC enables the construction of thin-walled, strong structures with a high freedom of design, properties well suited for the integration in digital concrete production. First trials for this integration have been performed and published. However, these studies only use short fibres mixed into the concrete matrix or textile reinforcement within the printing plane, which limits the transferred loads. This study shows the results of preliminary tests of vertical, out-of-plane textile reinforcements for digital concrete production. The textile reinforcement is fixed vertically and the concrete printing process is performed diagonally, “through” the textile. The results of four-point bending tests are presented.
10.5446/56108 (DOI)
Yeah, welcome to my presentation. I'm Niklas Pointe, and I'm delighted to present recent results of an experimental study we performed on interlayer reinforcement in Shotcrete 3D Printing. In this study, we investigate the effect of different accelerator dosages on the resulting bond behavior. As you all know, in the field of additive manufacturing, reinforcement integration is a highly relevant topic. When we look at the traditional way of casting, the traditional way of fabricating reinforced concrete elements, the integration of reinforcement is very simple: you place your reinforcement in the formwork, you fill the formwork with concrete, and the result is a reinforced element. When we now look at additive manufacturing, everything is different. We have our additive manufacturing process, we have the reinforcement element, and we have to bring these two things together. So we have to rethink the integration of reinforcement. And when we develop new strategies, we should take some fundamentals into account: we want to maintain the process automation, and we don't want to reduce the geometrical degrees of freedom this novel technology gives us. Moreover, we should use the innovative character of additive manufacturing and create novel strategies of reinforcement integration, for example novel reinforcement arrangements to save material. And the key is that we have to achieve a good bond between the reinforcement element and the concrete, so that we can produce elements with high mechanical performance. In the literature, you find several approaches to the integration of reinforcement. You can basically distinguish between strategies where the concrete supports the reinforcement, so the concrete serves as a supporting structure for the reinforcement, or the other way around, where the reinforcement supports the concrete, or incremental strategies like the ones my colleague Kala Matheo presented before.
In my presentation, I want to focus on interlayer reinforcement, which you can find here in the upper left. Interlayer reinforcement is a very simple and fast reinforcement method. You can conduct it in parallel to the printing process; in fact, you have to conduct it in parallel to the printing process, because you use the layered characteristic of your additive manufacturing process and place the reinforcement in the interlayers. You can use conventional reinforcement materials, and you are very flexible in the rebar diameter and in the rebar material itself. And when you combine this reinforcement method with adaptive path planning, where the layers are not only horizontally oriented but can be angled, you can easily produce force-flow-compliant reinforcement layers. And as the title of my presentation says, I performed these experiments with Shotcrete 3D Printing, so let me introduce it first. Shotcrete 3D Printing, short SC3DP, is an additive manufacturing method that falls into the material jetting category of the ISO/ASTM classification. The material is applied with high kinetic energy, as you can see in the picture here on the left-hand side. This leads to a very good interlocking between the layers and a good compaction of the deposited concrete. And you can change a lot of process parameters: the nozzle distance, the nozzle angle, and different volume flow rates, and with these you can change the strand geometry, for example. You can say that Shotcrete 3D Printing is an additive manufacturing method for the production of large-scale elements. A good example is this double-curved, reinforced, 2.5-meter-high wall that was printed a few years ago at our Digital Building Fabrication Laboratory.
And as I told you, there are a lot of process parameters you can change, and this was exactly where the motivation for this study started. When we now look at interlayer reinforcement, there are a lot of parameters that could affect the resulting bond, and in this illustration you see some of them. The parameters can be material-related, for example the accelerator dosage, or process-related, like the nozzle distance or the different volume rates, such as the air volume flow. In this study, I want to focus on the accelerator dosage. We use the accelerator to control the structural build-up of our material, and with this we can control the building rate. So we are changing the rheological properties of our material, and we asked ourselves: could this have an effect on the bond behavior of integrated rebars? So we produced specimens at our Digital Building Fabrication Laboratory with the shotcrete process, and you can see a schematic illustration of these specimens on the right-hand side. These specimens consist of six layers: we printed three layers, then we placed reinforcement bars perpendicular to the printing direction, and then we covered them with three additional layers. We tested three different accelerator dosages: 0%, 2%, and 4%. The bond length was limited to five times the rebar diameter, which was 12 millimeters, so we have a defined bond length of 6 centimeters. We cut the specimens in the fresh state and gained different specimens out of each print: three pull-out specimens, and one specimen with a carbon bar that we later used for computed tomography scans. And we produced reference specimens in molds, where we cast the material. As you can see here, there is also a formwork, because we covered the printed specimens with concrete, so that we get better comparability of the geometry and a more uniform load introduction for our mechanical tests.
However, there is no additional bond, because this plastic sleeve here was longer than the formwork, so we have no additional bond here. Here are some impressions from the printing process. On the left-hand side, you see the printing of the third layer with our shotcrete nozzle at the DBFL, and on the right-hand side, you see a specimen. An important detail is the height-adjustable support structure here, which was fixed to the specimen plate so that the rebar could not tilt after it was inserted into the interlayer. So to sum up, we want to investigate the effect of the accelerator dosage on the resulting bond properties, to start the feedback loop and to see the limits of this reinforcement method. Because we want to see the effect of the accelerator dosage, we first have to check the effect of the accelerator on our rheological properties, so we performed fresh concrete investigations. Then we want to see the effect on the bonding zone, so we conducted computed tomography scans to look at the bonding zone and assess the quality of the bond. And finally, we are interested in the mechanical performance of our specimens, so we performed pull-out tests according to RILEM RC6, 28 days after manufacturing. Let's start with the fresh concrete properties. We performed this test with a shotcrete penetrometer, which measures the penetration resistance. We did this directly after the printing process, so we have a zero-minute value. The zero-minute value gives us information on the material properties at the time the rebar was inserted and at the time the material covered the rebar. The penetration resistance can then be recalculated to a yield stress in accordance with Lootens et al. 2009. Here on the right-hand side, you see the result: on the x-axis the accelerator dosage, and on the y-axis the yield stress.
And you see that between 0 and 2%, there is no significant change in the yield stress for the zero-minute value. That does not mean the structural build-up is the same, but the zero-minute value is. When we increase the accelerator dosage to 4%, we see a significant increase of the yield stress: here it is about 15 to 20 kilopascals, and for 4%, more than 60 kilopascals. But does this change in yield stress have an effect on the bond strength and the bond quality? To answer this question, we have to look inside our specimens, and as I told you before, we used computed tomography. Within the analysis of our computed tomography scans, we defined a region of interest. This region of interest has a circular shape around our rebar and a diameter of 32 millimeters. Our rebar has a diameter of 12 millimeters, so we are looking at the area of 1 centimeter around the rebar. The length of this region of interest (it is a volume) was set to 6 centimeters, which is equal to five times the rebar diameter, i.e. the bond length for the pull-out test. Then we used grayscale analysis and calculated the volumes of air, concrete, and rebar. We define a void content, which is equal to the volume of the voids divided by the volume of the region of interest minus the volume of the rebar. Here you can see the results. For 0 and 2%, the void content is in the same range, about 2 volume percent. However, when we look at the specimen with 4% accelerator, the void content is twice as high as the values for 0 and 2%. So let's remember the results for the yield stress: we can correlate them and see that with an increase in yield stress, we have an increase in voids. Are these voids located homogeneously around our rebar? This was the next question, so we looked at the bonding zones. And no, they are not evenly distributed around the rebar circumference.
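The void content metric just defined is straightforward to compute from segmented CT volumes. The ROI geometry follows the talk (32 mm ROI diameter, 12 mm rebar, 5 times the diameter, i.e. 60 mm, in length); the segmented void volume below is a made-up illustration.

```python
import math

# Void content as defined in the talk: V_voids / (V_ROI - V_rebar), for a
# cylindrical region of interest around the rebar. Geometry follows the
# presentation; the segmented void volume is a made-up illustration.

def void_content(v_voids_mm3, roi_d_mm=32.0, rebar_d_mm=12.0, length_mm=60.0):
    v_roi = math.pi * (roi_d_mm / 2) ** 2 * length_mm
    v_rebar = math.pi * (rebar_d_mm / 2) ** 2 * length_mm
    return v_voids_mm3 / (v_roi - v_rebar)

# e.g. 850 mm^3 of segmented voids inside the ROI:
print(f"{void_content(850.0):.1%}")  # about 2 vol-%, the range reported for 0-2 %
```

In practice the void volume would come from thresholding the CT grayscale histogram into air, concrete, and steel phases, as described in the talk.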
In picture A here, you can see the space between the ribs of the rebar from the top view and the bottom view; the top view is the side that was facing the nozzle. You see that there are not a lot of voids here, only some small ones. But when we look at the bottom side, there are huge voids underneath. So especially for 4% accelerator, a material with a higher yield stress, voids are increasingly identified in the bottom zone. You can also see this here in a cross-sectional view: here are voids under our rebar. The second location we could identify was voids under or above the ribs, which could be caused by rebound of the material. Now we want to see the effect of these voids on the mechanical performance, so we performed pull-out tests according to RILEM RC6. The geometry of our specimen is a cube with an edge length of 20 centimeters; the bond length is located at the bottom. You fix your cube to the testing machine frame and then you pull the rebar out of it. The bond length, as I told you before, was defined as 6 centimeters. We performed these experiments displacement-controlled, with a defined displacement rate of 0.02 millimeters per second, for printed specimens and for cast ones. Let's start with the results of the cast experiments: we see no significant effect of the accelerator on the bond strength. This is a very nice result, because we have a homogeneous bonding zone; we placed our concrete in the formwork and vibrated it. So in the range up to 4%, there is no significant effect of the accelerator on the bond of the cast specimens. Here is the bond strength, and here the accelerator dosage. Now let's take a look at our Shotcrete 3D Printing results. We can see two things. The first is that the results of our shotcrete-3D-printed specimens are always above the cast ones.
So the bond strength is higher than in the traditional casting process, and this could be due to the high compaction of the material during the printing process. The second thing is that we see an effect of the set accelerator on the bond strength, and this can be correlated to the void content. For 0%, we have 26.2 megapascals, and this decreases to 22.8 megapascals for 4% accelerator, a reduction of about 13%. When we bring this together with the void content, we can say that the reduction in bond strength is related to the increasing void content: when you have more voids in your bonding zone, the pull-out forces have to be transferred through a smaller contact area, so you have stress peaks over your bond length and therefore earlier failure. To sum up the results of our experimental study: the integration of interlayer reinforcement is a very promising, fast, and simple integration method, and we achieved very good bond strengths. There is an effect of the accelerator on the bond strength; however, up to 2% accelerator we could create very homogeneous bonding zones, and regardless of the accelerator dosage, the results show higher values than conventionally cast specimens. That's it from my side. Thank you very much for your attention, and if you have questions, don't hesitate to contact me, or ask me now or later via mail.
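For reference, the bond strengths quoted above follow from a common pull-out evaluation: the average bond stress is the pull-out force divided by the lateral surface of the bonded length. The rebar diameter and bond length follow the talk (12 mm, 5 times the diameter, i.e. 60 mm); the forces below are simply back-calculated from the reported stresses for illustration.

```python
import math

# Average bond stress from a pull-out test: force divided by the lateral
# surface of the bonded length, tau = F / (pi * d * l_b). Diameter and
# bond length follow the talk; the forces are back-calculated from the
# reported stresses for illustration, not measured values.

def bond_stress_mpa(force_n, d_mm=12.0, bond_len_mm=60.0):
    return force_n / (math.pi * d_mm * bond_len_mm)

tau_0 = bond_stress_mpa(59260)  # ~26.2 MPa reported at 0 % accelerator
tau_4 = bond_stress_mpa(51570)  # ~22.8 MPa reported at 4 % accelerator
print(f"{tau_0:.1f} -> {tau_4:.1f} MPa, reduction {1 - tau_4 / tau_0:.0%}")
```

The roughly 13 % drop between the two dosages falls out of the same ratio, matching the figure quoted in the talk.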
Additive manufacturing with cement-based materials has recently become increasingly common on construction sites. The high degree of freedom in individual geometric shapes, the associated potential for resource-efficient designs, and the high degree of automation could make this technology a milestone in the history of the construction industry. Many of the existing additive manufacturing techniques are initially based on unreinforced concrete. However, for many structural elements, the use of reinforcement is indispensable, and reinforcement integration therefore represents a prerequisite. One promising reinforcement strategy is the use of interlayer reinforcement. This method specifically uses the layered characteristic of the additive manufacturing process by integrating reinforcement between the applied layers. In combination with an adaptive path planning, it is therefore possible to manufacture force-flow-compliant reinforced elements with a minimal increase in process complexity compared to an unreinforced production. However, besides the integration process itself, material-process interactions represent an important research topic. Especially for Shotcrete-3D-Printing, the use of accelerators can significantly change the structural build-up of the applied material and thus affect the bonding ability of the sprayed concrete to the integrated reinforcement element. The present study investigates the effect of accelerator dosage on the bond properties of integrated rebars. The resulting bond is analyzed non-destructively via computed tomography and mechanically by pull-out tests according to RILEM RC6. The results show that the material compaction caused by the sprayed application leads to excellent bond properties. However, when high accelerator dosages are used, bond deterioration can be observed.
10.5446/56026 (DOI)
Dear audience, welcome to my presentation on strategic management in higher education institutions. I'd like to show you five basic lessons learned, drawn from experiences in the higher education sector. Let me start with number one of these five lessons. Number one: strategic management can create huge benefits for universities and faculties; it can definitely be more than just a ritual. Let me explain what that means. If you develop a strategy, if you practice strategic management in a university, usually you are torn between two different orientations. Let me take the example of a faculty. So assume we are a faculty, you're the dean, and now you have to develop a strategic plan. Your president or your vice-chancellor comes and says: please make a strategic plan. So he requires a development plan from you; he or she wants to have a target agreement with you, fixing the goals and the objectives you have to achieve. So you have to follow the rituals that the university defines for your faculty. What do you usually do? You aggregate the things that you are doing anyhow. You put together all the things in teaching and research and quite often write a quite comprehensive strategic plan. You try to fulfill the standards, the obligations that are put upon you. You follow routines, and you legitimize what you are doing, because you can say: president, look, here is my strategic plan, and I'm going in the right direction. The impact of such an exercise is usually quite low. But there is a completely different side of the coin. You could be an autonomous faculty which steers with goals. You really use your strategy to steer what is going on in your faculty; you use the strategy to improve, to make a major change, to use it for change management and to move crucial steps forward. And if you do that, of course, you ask: what is in it for us?
How can we as the faculty benefit from the strategy? We could become an entrepreneurial faculty if we have that kind of idea in our strategy and follow that plan. So this is really finding your own pragmatic approach. This is your institutional management. And there can be a contradiction, of course, between the two things: you have to follow the external rituals, and you have your internal objectives. So how to deal with that? Of course, you do what you have to do. You cannot tell your president: I'm not interested in your strategic planning process. You have to be part of it. But at the same time, you can try to create benefits for your faculty. You can use the autonomy that you have to do something that is really beneficial for you. So we can really ask the question: what is in it for us as a faculty? And let me come to the potentials of a strategy. What could be the answer to this question? What could be the real benefit of a strategy? What could be the function of a strategy? And once we have the functions, we'll come quite quickly to the principle that the form of the strategy should follow the function. So what could the functions be? The first is the ability to reallocate, to coordinate, and to justify your decisions. If you want to take away money from one department and put it into another, a strategy helps, because a strategy can say what this department is doing and that there is a preference in our strategy for their activities. That's the reason for reallocations. If you want to do that, what are the instruments that you can use? You can use formula funding. You can use a strategic pool, where you say: we are funding strategic developments, and those who follow can get the money. You need a strategic human resource policy. You have to prioritize your goals, and you can sign agreements between the faculty and the departments, for instance, about the targets that have to be reached.
So this is coordination and reallocation. The second function: a strategy is quite a good tool to build critical mass and to create synergies, especially in research. Quite often we find we cannot do it alone. We need partners from other universities, partners within our university, a critical mass of people. So what can you do, again coming to the instruments that are relevant for that? You could say: as a university, we are going to create internal centers of research excellence. And then we would say: we do not finance research across the board for every member of our faculties, but we put the money in a focused way into a center of excellence. And you could have a competition between these centers of excellence, and only two or three or four in the end will be nominated to be such a center. And this is the place where you then would have enough funding to build a critical mass. You could form strategic alliances, or you could focus on cross-cutting topics. I will come back to that later on. Three more functions. Of course, a strategy helps you to adapt to a changing world. The outside world is changing all the time at rapid speed. So you can do a SWOT analysis (strengths, weaknesses, opportunities, threats) in order to adapt to what is going on outside. You could involve your stakeholders in your strategy development. And you could have a very efficient planning process to come to decisions in time, to be able to adapt quickly to a new external trend. The next point: a strategy, of course, is relevant for making the work of your university or faculty members meaningful. It creates identity. It creates motivation. This is really an internal objective that you have. So if you want to have that, you hold large strategic workshops. You focus on your values, on your mission. You have a lot of internal communication about your strategy, and you have goal-oriented rewards, even for individuals.
If they follow the goals, they get top-ups on their salary, for instance being rewarded for research performance or publications. So: making the work meaningful, creating identity. And now, this is not the internal but the external function: you can demonstrate the famous unique selling proposition. So what makes you unique? You can make clear what you stand for. Of course, then you have to invest more in outside strategic reporting, in your mission statement and in external communication. And the instruments already show there can be contradictions. There is not one single instrument for every function. Look here at these two things: the efficient planning process and large strategic workshops. If you want to adapt to a changing world, you have to be quick. If you want to motivate your people, maybe you have to be slow, take everyone on board and hold meetings all the time. So there are different instruments for different functions. The same if you have more the internal aspect in mind: you focus on internal communication. If you have the external side, the outside world, your stakeholders in mind, then you focus more on external communication. So this is what I mean by form follows function. Be clear about the function, and adjust the form of your strategic planning process to the function you want to have. So that was number one. And here is number two. I think universities and faculties need a very good story for their strategy, for their focus, for their priority. And I would like to present some examples for that. We really find a variety of typical content or priorities of universities and faculties. And just to say, well, we want to be excellent, that's not enough. You have to be more specific with your story, because what does excellence mean? Everyone wants to be excellent. And here are a couple of examples, some German, some European, but you will find the same things around the world.
The first is many universities focus on topics and transdisciplinarity. So there are many universities in this world saying, we have different faculties, we have different disciplines, but we want them to work together on those topics where we are strong. And here's the example of the University of Wuppertal. That's a German university. They say we have strategic priority areas, and you can see them here. It's one, two, three, six different areas. It's building blocks of matter and education and knowledge in social and cultural contexts, health, illness prevention, and so on. You see these different topics, and they say we invest our research money and we build up teaching capacities around these six topics. This is our strength. This is where we can contribute as a university. So they really focus on topics. And that's an interesting thing that many universities around the world are doing. The next one is especially international, large research universities. They quite often say it's also about topics, but they say we are the ones to tackle the grand challenges of today and tomorrow. And you have the example of the University College London here, which is saying our strategy focuses on global health, sustainable cities, cultural understanding, human well-being, justice and equality, and transformative technology. So they say, well, look here, we are contributing to the challenges of the societies of this world, to the challenges of mankind. That's ambitious, but of course it's also a clear and a nice profile that you can follow. Maybe you have to focus on one or two of these grand challenges if you're not so much a comprehensive university, but that's another way of prioritizing. The next one, the famous Dutch University of Maastricht. What is the University of Maastricht famous for? And everyone around the world knows them for that. They built their profile around a distinctive innovation in teaching. 
Because the University of Maastricht is one of the mothers and founders of the idea of problem-based learning. And if you look at their website, they say: what makes the University of Maastricht special? How are we different from others? And they say it's our education model. It's the European pioneer of problem-based learning, the education model that has been working, and so on. So this is their profile. They have a specific innovation in teaching and learning. And this is also a very good idea to build a focus and to make clear what you stand for. So, student, if you go there, you get problem-based learning in every program, in every lecture. And that's a fantastic profile. But there's even more. Universities can focus on target regions. The University of Bayreuth, you see it on the left, a screenshot from the website. The University of Bayreuth in Germany is famous because they focus on Africa. They have a lot of programs, they have an excellence cluster and so on, which is really focusing on Africa. That's a target region. Or on the right-hand side, the FOM, a German university of applied sciences. They say, you can see the claim here: we are the university for professionals. So they have a specific target group: professionals, people who are working and who are studying part-time next to their work. And they say: this is what we are famous for. So if you are someone working, if you're a professional, come to us. That's our profile. Again, very distinctive and very specific. I think I have two more. Here you see that small higher education institutions also sometimes make a very clear point for the university as a whole. We have a famous one in our country: the Eberswalde University for Sustainable Development. They even have their profile and their strategy in their name, because they say: we are the university for sustainable development, with a distinctive profile.
And you can see from the description here, they have done it now for almost 200 years. They started with forestry and things like that. And everything they do is focused on sustainable development. They want prizes: they want the prize as the greenest university, and so on. So a very, very clear profile, very well positioned in the German landscape. This is a very small university in a very small city, but everyone knows them, because they are so special. And my very last example is that you can also build your profile around your values. We have the private University of Witten/Herdecke, also a German one. And they say: we build our profile around the idea of knowledge and competence being reflected in practice, and enough freedom for students to develop their own personality; values combining curiosity, dedication, and so on. So their idea is: we define ourselves through common values. And they put the values into the focus of what they do. And again, they are very well known for that. So again, a very specific profile. And so the message here is: think about it. No matter if you're a faculty or a whole university, what makes you special? What could be your profile? And try to go that way. So that was number two. Number three: it is very clear that the process, the process of strategy development, is relevant for the quality of the outcome. And what is important for the academic context: the process always has to be a mix between top-down and bottom-up. You need them both. Let me explain. A strategy process should be carefully conceptualized as a bottom-up and top-down process, because of the different functions. Bottom-up: in a university, you only get motivation, creativity, and ownership of people if you develop things bottom-up. So what do you have to do? You need full assemblies. You have to run projects, consultations, evaluations, workshops, working groups, and so on. So many, many activities coming bottom-up.
But at the same time, you need efficiency of the process. You have to move forward. You need coherence. You need coordination. Therefore, you need a schedule, frameworks, priorities, incentives, funding. You have to establish units to follow your strategy, and that has to come top-down. So you can see: one of them alone is not enough. You need them both. Bottom-up motivates; bottom-up promotes creativity. But top-down gives a direction and leads to an efficient process. And if you are in the position to design a strategic process, if you run strategy development, one of the most important things is to reflect on what the steps are and what goes bottom-up and what goes top-down. And if you do that in an elaborate and thoughtful way, this is already 50% of succeeding with your strategy. So that was number three. Number four: universities are quite good at strategic planning, but often have difficulties with strategy to action, so with the implementation of strategy. And I would say to universities: please use the tools for strategic implementation. Don't neglect implementation. There is often a lot of engagement in developing ideas, developing a strategy. And then in the end, you have the strategy on paper. You show it to the outside world. You show it to your people in the university. And what happens then? How can you implement? This is a very important step. And let me take an example. Assume you as the university want to increase your relevance in research through interdisciplinary collaboration. So this is our goal. How can you put this strategy into action? A couple of examples. Of course, you need internal communication and internal facilitation. If you want interdisciplinary collaboration, you need workshops about it. You need retreats. You need research scouts: people running through the university, identifying those professors from different disciplines who could work together.
You need task forces, projects, activity planning, proximity. If you want people to collaborate, you have to put them in offices next to each other; this is also, of course, something to put strategy into action. External communication and marketing are of equal importance: you can use your interdisciplinary research topics to structure communication, to create a website about it, or whatever. Performance measurement: you need the right KPIs, the key performance indicators, targets, agreements about them, reporting, to put strategy into action. You need transparency: have we already been successful? You can see it from the KPIs. Funding and resources: of course, if you want to promote your strategy, you have to fund it. You have to provide the money to realize it. For instance, if you have a formula where you count the number of PhDs as a kind of success criterion, then for a collaborative PhD where two faculties work together, you could count that PhD twice. So implementing it into a formula, or a competitive pool, or target agreements. Also with a professor's appointment you can already set a target agreement with the individual professor: you have to provide a contribution to interdisciplinary collaboration. So: incentives. You could even create a unit which takes care of the whole strategic idea of research interdisciplinarity. I have the example of the Technical University of Darmstadt, again a German one. They have something they call the Forum for Interdisciplinary Research. That's a platform for the whole university. It has an organizational structure; there's a director; there are fellows for two to three years; there's an administration. And that's a unit that engages in agenda setting, to promote topics (digitalization, water, and so on), everything related to interdisciplinary research. And they run workshops and conferences. They organize an annual day of interdisciplinarity.
They have small internal competitive funds to push activities, and they provide services for bottom-up initiatives. So this university says: we have a strategic goal, and we set up a unit in order to push the strategic goal forward. This is implementation. This is strategy to action. And my very last example: you can institutionalize transdisciplinary collaboration with the famous matrix structure. Imagine you have a university with four faculties, different disciplines. Now what do you do? You implement cross-cutting profiles. This is, I think, the example of a Dutch university: profile number one, dynamics of youth; number two, institutions for open societies; pathways to sustainability; and so on. And what does it mean to create a profile? It means you create a cross-cutting structure. So on the one hand, the university is structured vertically in faculties, but horizontally it is structured in profiles. And there can be a separate infrastructure, a governance structure, funding for the profile, staff members working for the profile, a building where the profile is located, evaluation of the profile and not of the faculty. So you see, it's a new organizational principle. It's a matrix. You do not only look at the faculty; you organize the university according to the profiles. And this sets a very strong incentive for the strategic goal that we are talking about. So these were my examples: a very specific topic, a specific initiative to increase transdisciplinary research, but you can see how important implementation and strategy to action are. Okay. And now already the last one. Remember: five lessons learned. There could be more, but I picked out five. Strategies often fail in practical implementation because of typical university issues. And what is important: instrumental design. Instrumental design can help to avoid the failure. And let me, in a minute, again give a couple of examples.
First of all, I'd like to show you the standard process of strategic management. You will know it. It sounds quite easy. You initiate an analysis of the environment, of strengths and weaknesses, opportunities and threats. You reveal, coordinate and develop strategic directions and options. You identify and foster a profile, a vision, a mission. You prioritize, set and communicate goals, and set up a strategic plan. You pursue the goals. You apply steering mechanisms. You motivate. So you implement (remember: implementation). And you supervise your goal achievement with monitoring and incentives. And the whole thing is a cycle; it always goes in a cycle. So it sounds quite easy, but there are many obstacles on the way, and the question is how to overcome them. There are some typical failures of strategic university management. But the good message is: there are potential remedies to these failures. So let me give you some examples. In many universities, there is the danger that strategies are irrelevant for everyday life. They are on paper. They look nice. You can hang them on the wall, but they don't matter in everyday life. So again: strategy to action, participation, communication. This is what it takes to make it relevant for everyday life. If nobody in everyday life sees anything of the strategy, it's not worth developing it. Often, we have quick starts that fizzle out. Everyone is ambitious and you're starting, and then at a certain point things fade away. So you need process promotion. You have to run strategy development as a project. And you need the power to push this thing forward. As a leader, you cannot say: I leave it to someone else. You have to do it. You have to be the one promoting what is going on. Often we don't have cycles. The university takes a one-time action, the strategy is there, but there is no cycle and there is no feedback loop to revise your strategy. So you have to implement cycles.
The question is how many years it takes to have a vision, strategic plan, targets, action plan, communication, and then start again. But you definitely need a cycle. There's more. Often, strategies are set in stone. There is no flexibility. There are five-year plans and nobody changes them. This means you need regular cycles for adaptation and revision of strategies. You need to continuously assess what you are doing. You cannot just say: oh, we have the strategy, we're happy, let's look at it again in five years' time. No, it's a continuous process. Often there is no follow-up to intensive participation, which leads to disappointment. I know universities that held a big full assembly, and everyone was discussing the strategy and there were high expectations, but then somehow people never heard anything again about the process. So they were totally disappointed. So you need a follow-up process. You need feedback loops. You need, maybe, a regular leadership statement: how far did we come with our strategy? And you have to be transparent about that. Quite often the missions of universities are quite interchangeable. I would recommend you do the exercise: take a random sample of five mission statements of universities, delete the names, and then show it to your colleague and ask: can you guess which university it is? And you can't, because they are all quite the same. So again: go beyond the general excellence claim, develop your profile. Remember, I think it was my lesson number two. This avoids the problem of interchangeable missions. Last slide on that. A SWOT analysis usually is too much inward-looking, with not enough opportunity and threat analysis. So you need stakeholder consultation, you need benchmarking; that has to be part of it. Strategies often are very bureaucratic. So you have to keep it alive with a pragmatic approach. You need a lot of dialogue elements.
Don't produce too much paper, because then it becomes bureaucratic. It has to be lively. It has to be communication. And there is often no strong link to funding, which of course can be solved by having a strategic orientation of funding, with formula funding or competitive funds. So think about instrumental design. Design what you do, the process that you develop, in a good way; there are enough ideas for how to do it. So you will be able to solve it, and good instrumental design is crucial. So here are my five lessons learned again, in a very short version. Number one was: focus on specific benefits and respect the principle form follows function. Second: create a convincing story. Third: take care of the process. Fourth: turn strategy into action and implement. And fifth: be aware of potential failures and avoid them through instrumental design. Five lessons learned. There could be more, but these five, I think, help you to run strategy development in a university in a good way. Thank you very much for your patience and attention.
In this video, Professor Ziegele explains the basics of strategic management for universities. Strategic management can bring tremendous benefits to universities and faculties, but university and faculty strategies need a good story and clear priorities. Strategies should always be developed with a good mix of top-down and bottom-up elements. The biggest difficulty in universities is often implementing the developed strategy. These difficulties can be countered by developing good tools for strategy implementation.
10.5446/55782 (DOI)
So, as you probably saw in the news and on our social channels, we released version one of this open soil spectral library. And it's ready for testing, and we are basically calling on people to contribute now, so you can possibly use it actively. It is a true community project, so we hope in the coming months, or definitely in 2022, that there will be some spin-offs and that you will find your way to use this. And what I will do in this presentation, I think I have 20 minutes, John? Yeah, that's fine, however long you need, 20, 30 minutes. Okay. So I will talk about what's there, what's now available, and I will do some demos. And I will also show you where you find things, where you find data. We have multiple entry points to the data, so I will show you the main ones, and the rest is all documented, so you'll be able to follow. And everything I show you: you don't have to make any screenshots or anything. This is really an open data, open source project, and everything I show you has a copy on the web. So you don't have to worry about writing down any URL or any parameters or how to access it, et cetera. Just to start with: we released it on the 5th of December, World Soil Day. It was quite symbolic, but you can imagine, when you pick a date and then, because of Corona complications as well, in the last two weeks I have to say we really worked nights so we could reach the 5th of December. So it's not 100% operational; this system is really for testing purposes only now. We would like to slowly transition to operational, but that also depends on your feedback. So if we get good feedback, there's a good chance that we can improve and fix the bugs, and then it slowly becomes operational. At the moment there's a database, a MongoDB. There are also the models; we fitted lots of models.
There's also a GitHub repository and an API, which works too. And we started this bookdown book, so you can follow along in the book, in the documentation. This is what's available at the moment. We fitted the models using ensemble machine learning with mlr in R, and these are global models. And there are limits. I mean, they're not perfect, and there could also be problems in the data we imported. So we're filtering, and the models still have to be refitted. There will be versions, and every model will get its own ID. So that's what's available at the moment. You can also read, on the Woodwell Climate Research Center site, a post about the open soil spectral library, where you can also find all the links. So that's also a place we recommend for learning about it. We got quite some help from our colleagues from Belgrade, the GI lab, and they basically did the front end, which is built in R Shiny and is also open source. So you can access all the code; it's available, and you can see how the front end was made. The front end was quite some work, actually more work than I thought. The back end we kind of knew already in May, what we want to do. For the front end, yeah, we were going up and down, and also for modeling we wanted to do more at the beginning, but later on we decided: let's have a first version with just global models, and then we can extend. So, very important: the open soil spectral library is a really true open data, open source project. We use the MIT license for the code, and we use a CC-BY license for models and data. So it is a truly open data project. It's also a community project. We aim at an open development community. You can see it on GitHub, it's all documented, and in the different channels people have already started asking questions, so we are already resolving them in a community setup. We are low budget.
I mean, you know, we don't have the budget of these companies. So how can we make something that can serve, let's say, 10,000 people? We do believe it's possible, and one of the models is Telegram, for example. It's possible if you know how to use the infrastructure, so I think you can make it quite cost-effective. So we do believe we will eventually be able to serve thousands of people per day. And I should mention, we are a USDA-funded project, so we run until, I think, 2023, John, if I'm right? Yes, 2023. So the project will continue. We expect a lot of development next year especially, and hopefully there will be some workshops and more and more people contributing models, and we really hope that in 2022 it will start living as a community project. This is just the most general design, just to show the back end and the client side. We use MongoDB to store the data. Then we have this machine learning engine, which is based on R; we use a parallelized version of R, and we have a high-performance computing system where we fit the models. The models that are now available take about two days to fit in total. But if we use our high-performance computing center at OpenGeoHub, we can theoretically compute everything in two hours. So we could do very fast updates, but otherwise there's a lot of computing. And then we have the plumber package for the API, and people can access this data from R, or they can get a CSV file. And we also try to georeference everything, so you can also open the data in QGIS and look at the values. And as you see, there's a document where you can read about all these things: that's the OSSL manual. The GitHub organization at the moment has multiple, separate repositories: one repository for the technical documentation, then one for the imports.
That one has all the steps that we implemented to import foreign data. And then there's the repo with the models, and there's one more repo for the front end, the apps. So these are the repos in the GitHub soilspectroscopy organization. And the book is slowly coming along. The main chapters, I would say, are there, but there are still a lot of things missing. But you can, as you probably already found, read the book. And it's also possible, if you want, you can go and, let me type this in: if I go to the homepage of the project on GitHub, you see there's the manual, and you can see the URL to access the book. And you see there is a button to edit. So it's also possible, if you want to fix some text, if you find something, you can edit it. You say: no, this is a wrong reference or a wrong name. Or if you want to register a new dataset, et cetera, you can come and edit. So that's the GitHub side. So, for example, as I said, the new spectral libraries: if you're aware of some library that we missed, you can come to this button, do the edits and add a new library, if you're aware of one. And then it will automatically pop up here, and we just have to accept it. So that's the book. Let me go back. The front end. We have two front ends. One we call the explorer, and the other front end we call the engine. The engine will be used for all the prediction, calibration, harmonization, whatever. And the explorer is just a viewer, basically. It starts as a world map showing the density of all the points. The engine is interactive, and for the engine we thought about how to do it properly, but eventually we agreed that we need to have some kind of authentication, because otherwise people could really crash it easily. So now we have authentication, and you can upload some data. So here's the data.
And once the data pops up, you can calibrate the data using either just the US spectral library or the global spectral library. So let me use soil carbon. I pick a model, and you see, when I run execute, in the background the server makes the predictions. It's fast; it takes practically one or two seconds, and this is for the four points. You can see the prediction: you have the mean value and the error, this is in log space. And you can see that we then back-transform, and you get the value in percent, with the upper and lower prediction interval. So in this case, for some points we predict actually very low carbon. Here we predict a bit higher carbon, 1.2 percent, but it's between 0.86 and 1.5, the lower and upper prediction interval. And you can also look at each model. You can see the properties of the model. You can see that this model is relatively accurate. It's based on the MIR. It has an RMSE in log space of 0.15, and it's based on 57,000 points. So that seems to be one of the best models, actually. Of course, when you do a demo presentation, you always pick your best models, right? But, you know, I'm not a person with a background in soil spectroscopy, and I was actually surprised that with the NIR spectra you get significantly lower accuracy when you do the cross-validation, five-fold cross-validation. So I have to say, the MIR global models, for some parameters, I almost wouldn't recommend using, especially for bulk density and potassium. And for the NIR models, I almost wouldn't recommend predicting numeric values at all; maybe predicting a few classes, maybe three, four classes. So that's the situation at the moment. I don't know if we can improve that, but at the moment, that's the situation. So that's the engine.
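The back-transform step described here can be sketched in a few lines. This is only an illustration, not the project's actual code: the talk doesn't say which log transform the models use, so log1p and a normal-theory interval on the log scale are assumptions.

```python
import math

def backtransform_prediction(mean_log, se_log, z=1.96):
    """Turn a prediction made in log1p space back to the original scale,
    together with an approximate lower/upper prediction interval.

    Assumptions (not confirmed by the talk): the model was fitted on
    log1p-transformed values and se_log is the error on that scale.
    """
    lower = math.expm1(mean_log - z * se_log)
    mean = math.expm1(mean_log)
    upper = math.expm1(mean_log + z * se_log)
    return lower, mean, upper

# A log1p-space prediction of 0.79 with error 0.12 comes back as
# roughly 1.2 percent soil organic carbon, between about 0.74 and 1.79:
lo, mid, hi = backtransform_prediction(0.79, 0.12)
```

The same expm1 back-transform also explains why the reported interval (0.86 to 1.5 around a mean of 1.2) is asymmetric: an interval that is symmetric in log space becomes skewed on the original scale.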
And let me also show the Explorer. The Explorer has limited functionality. Every time you start it; we didn't know what to do at first, but in the end we decided to preload all the data, because we said, look, if somebody wants to explore the whole thing, let's just load everything. So the first time you load it, it takes two or three seconds, but after that you can do anything: subset, download. For example, I can pick this Central African dataset, that's this one, or I can zoom in somewhere and pick some data. So here's the data. If I click on a point, it immediately shows the spectra. For the spectra we use plotly, which is a very nice package in R. Plotly lets you zoom into the spectra, pan, download, take screenshots and so on, so it allows you to really play with the data interactively. As I said, the Explorer is very simple: you click and you quickly see the differences between the curves. This is a site in Central Africa, and you would expect that curves that are close by should also be very similar. And once I have selected, for example, the Central African data, I can just download it, and I get a CSV file. This takes time: if you select everything, it's something like 400 megabytes, so it will take a while to download. These are not datasets we made; we only bind datasets together. If you really want to understand a dataset, you have to go to the repo called OSSL imports.
And there we document everything, every step we do, so it's fully reproducible research, and if you discover any problem you can come back and report it. This is the Central African Soil Spectral Library. You can see where the library comes from: there's a publication and a GitHub repo, and we got the data thanks to Laura Summerauer, a Swiss colleague. They published this paper and made the data available, and now you can see how we do all the imports, every step, including how we harmonize. Here, for example, we have to convert the values from ppm to centimol. These are all the steps we do to harmonize the data, and at the end we also plot the points, so we show where they come from. It's all in R markdown, and you can follow it. If you discover a problem with an import, you can register an issue and we try to fix it. Just today somebody went and said, I'm using this data, and I'm noticing a problem with the dates in KSSL. And I said, yes, I have to look at it; something may have happened, but KSSL unfortunately only records the year. If people really documented everything, you could trace back the exact date somebody did the field work, but in the case of KSSL we only have the years, unfortunately. So this is how things work on GitHub: if you spot something, please register an issue. Don't send me an email; I usually don't respond to individual emails unless it's something internal. If you want to improve something, please use GitHub, and there's a group of people who can do fixes. So that's the Explorer, and this is the Estimator.
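One of the harmonization steps mentioned above is converting laboratory values between ppm and centimol of charge per kilogram. A minimal sketch with standard molar masses and charges; the actual conversions applied to each dataset are documented in the ossl-imports repo, not here.

```python
# Sketch of a harmonization step: converting exchangeable cation
# concentrations from ppm (mg/kg) to cmol(+)/kg. Molar masses and
# charges are standard chemistry; this is not the OSSL import code.
MOLAR_MASS = {"K": 39.10, "Ca": 40.08, "Mg": 24.31, "Na": 22.99}  # g/mol
CHARGE = {"K": 1, "Ca": 2, "Mg": 2, "Na": 1}

def ppm_to_cmolc(element, ppm):
    # mg/kg -> mmol/kg -> mmol(+)/kg -> cmol(+)/kg
    mmol_per_kg = ppm / MOLAR_MASS[element]
    return mmol_per_kg * CHARGE[element] / 10.0
```

For example, 391 ppm of potassium corresponds to 1 cmol(+)/kg.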
Now, for the Estimator: what models can you use, and what properties can you calibrate? First of all, I should say that we registered every soil variable and attached a unique URL to each one. Every column in the Open Soil Spectral Library has a unique URL, and this URL carries the metadata. We decided not to attach the metadata to the data; we separated them. Here you can see a lot of codes, which are the soil variables. We use a very simple system with three components in the name: the code of the variable, the laboratory (estimation) method, and the measurement unit. With these three codes the name is semi-human-readable: you can see this is aluminium oxalate, based on the USDA 4G2 method, in weight percent. And this one is bulk density, USDA 3B2, and you can trace that method back; we link the exact PDF where the method is explained, so you could potentially trace back the original laboratory method we refer to. We try to put in as much as possible, so that 3B4, you see, or 3B2, you can trace back. We also automatically plot the relationship between bulk density and depth, on a log scale, so it's a log-log relationship, and you can see that bulk density increases slightly with depth: as you go deeper, the bulk density is a bit higher. Here you can also see that bulk density is available in only three datasets, and there are some differences; it looks like KSSL has somewhat heavier soils. This one is in tonnes per cubic metre. We generate all these plots and histograms automatically; I don't make them by hand, I just write the code.
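The three-part naming scheme described above (variable code, laboratory method, measurement unit) could be parsed like this. The separator and the example code string here are assumptions for illustration; the authoritative list of variable codes and their URLs is in the OSSL manual.

```python
# Sketch of parsing a three-part soil variable name into its
# components. The "_"-separated layout and the example code are
# hypothetical; the real codes are registered in the OSSL book.
def parse_variable_code(code):
    variable, method, unit = code.split("_", 2)
    return {"variable": variable, "method": method, "unit": unit}

parsed = parse_variable_code("bd_usda.3b2_g.cm3")  # hypothetical bulk-density code
```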
And then, once I compile the book, all the plots get generated. You can see this is total carbon, also a log-log plot. Sometimes we look at these plots and see points that stick out, or something strange: here, for example, there are some organic soils that appear only in the NEON and LUCAS datasets and not in the others. So you can look visually and spot outliers and strange things. Those are the variables. Then there's a special repo called models, where you can see how we fit the models. The models run fully automated; this is the code that runs all of them. I wrote this code for the moment, but the idea is that we extend it and add more models; I'll talk about that. The code is not really long: basically I overlay everything and create a regression matrix, so I bind everything together and end up with one table for the whole world. Then I load that table, which I just call the regression matrix, rm.ossl. Once I load it, you can see all the data from the different datasets, and I create some dummy variables based on the instrument used, so the instrument is also included in the modelling; when you do a prediction, you also have to say which instrument was used. Then we fit a bunch of models, based either only on KSSL, the US data, or on the full global data. We also use geographic coordinates and extra covariate layers. And we have models that use both MIR and VisNIR, since some datasets have both; these are a kind of fusion model. The model fitting all runs in parallel, in one long chunk, fitting all the target variables, and then the models get uploaded to the S3 service.
And then the models are available to anyone in the world for calibration. That's the modelling side. We also create accuracy plots and provide them, with documentation of model accuracy, on the S3 service. When you do a prediction, we attach these model properties to every result, like metadata. So it's a very scalable system, and we can just keep updating it. For global modelling it looks very easy: you basically bind all the data together and fit the models; then you just have to understand the models. There is one monster table that contains all the information about the models. You see it here, the models meta CSV, so let me take a quick look at it. Here we have the log of organic carbon content in weight percent, and you see we have six models. This is the variable, so that's the metadata: what are the combinations of models? We have MIR using the KSSL subset and no geographic data; the model is available in this folder, soilspectroscopy, ossl-models, log.oc, and this is the URL of the model. There are also other properties of the model, for example the accuracy plot, and you have the model summary, the R-squared, the RMSE, and the number of training points. These tables are all generated automatically; I don't do anything by hand. So the first model is based on MIR. The second is based on MIR but also with geographic coordinates, and for that model, in addition to the MIR data, we use some 100 global covariate layers, which we add to the model. The summary of that model is available here, so I'll open it quickly to show you.
So I get this summary, and if I open it, it looks something like this. This is a model where we use MIR data, but we also have global layers. This is a climate layer, LST, land surface temperature for September daytime, and it comes in the top 20; horizon depth also comes in the top 20. The best predictors are the MIR PC2 and PC3, but we also have the horizon and some others. This model has less training data, because not all points have coordinates, so it's only 32,000. The RMSE is about the same as using MIR alone, so we think we can possibly improve the calibration accuracy with these spatially explicit models. Keith Shepherd knows about this; maybe one year ago he told me, oh, we should do spatially explicit models, and I said, how are we going to do that? But now it's working. So we also have spatially explicit models, but based on less training data: without the coordinates we have 70,000 points, with coordinates only 32,000. These are the summary models, and as I said, this is a monster table. Some of the models are based on both MIR and VisNIR, so a combination of the two. And there is a final level of modelling, basically total fusion: you use all the spectral scans and all the information you can find. We call this the bundle approach, and it's explained here somewhere; you can see how to generate predictions with it. But any time you want to make a prediction with it, you also have to have the coordinates of the point. You have to say, these are the coordinates, this is the depth; you have to specify all these things.
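The monster table of model metadata described above can be consumed programmatically. A small sketch of reading such a table; the column names below are invented for illustration, and the real schema is defined by the models meta CSV in the ossl-models repo.

```python
import csv, io

# Sketch of reading a model-metadata table and picking the model with
# the lowest RMSE. The columns (variable, spectra, subset, geo,
# n_train, rmse) are hypothetical stand-ins for the real schema.
sample = io.StringIO(
    "variable,spectra,subset,geo,n_train,rmse\n"
    "log.oc,mir,kssl,FALSE,57000,0.15\n"
    "log.oc,visnir,ossl,FALSE,40000,0.30\n"
)
rows = list(csv.DictReader(sample))
best = min(rows, key=lambda r: float(r["rmse"]))
```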
And then you basically predict with everything you have. So those are the different types of models. Let me show you quickly in the book. Here's the book; maybe I'll increase the font a bit so you can see. And then, John, I will stop and be open for questions. Here's the code. If I go to the modelling chapter and start with the first example: if you know how to use R, you could do this too. I load all the functions I need and all the packages, just a second. The functions are all in the repo. Then I just specify where the models are, on the S3 service, and I can load a model into my R session. This can take a bit of time, because the models are usually about 100 megabytes, so it depends on your bandwidth; you see, it's still loading. But once the model is loaded, I can predict; if I had 10,000 spectra, I could predict for 10,000 spectra. And here's the prediction. I load some sample MIR data, about 20 scans with 1,765 bands; it looks something like this. Then I just run the prediction, and it's very fast, a second or so. Out of the prediction we get the predicted mean value, the error, and the back-transformed value. In the prediction object you can also look at the original model, if you want to check the accuracy, and you also get the PC components, the transformed values, so you can make a plot if you want to do some diagnostics. Basically, you can use these models to predict, at the moment, I think, 16 variables.
So you can predict all of these 16 variables for any standard MIR or VisNIR scan anywhere in the world: you pick the model you want and calibrate, whether you use the Engine or R. You can also use the API. If you're not a developer, you may not see the value of an API, but the API is really the central thing we're making, because it's interoperable; it's for computers. Here's an example of the API where I can load some data. This is the "try it out". So here's the sample data, and I load a sample file, VisNIR, and say execute. This also does a prediction; it just runs through the API, and you can see it produces a clay content prediction. There are, I think, five scans, so we get five values, and for each prediction we again have the mean value, 25 percent plus or minus 4.8, so you end up with predictions between roughly 20 and 30 percent clay. So that's the API, and as I said, the API is the central, most important thing we want to make, because an API means you can build a soil phone app, or your own application or interface, or even a back-end solution, on top of the OSSL. And with that, John, I think I've shown everything; now let's see the questions, and I can still go back and demonstrate if needed. So please, John, let's open the discussion. Yes, there's a question from Celebalo. Okay, thank you, Jonathan. I would just like to ask Tom: when you were using the Engine, I saw that you have the option of using either KSSL or the one from Central Africa.
So my question is: is it possible, maybe in the background, to run them both at a time and compare which one suits better? For myself, my expectation would be that our soils are more suitable for KSSL because they're in the temperate zone, as compared to the tropics. So I would like to know if you have thought about that. Thank you. At the moment, the model is global. We use tree-based learners, and if you have the coordinates, so if you know, okay, it's in the tropics, that's the place, we use the environmental covariates. Here in the Engine, let me try that: if I use geolocation, I have to add the geolocation data, and when I use the model with geolocation, it will take into account that you are in the tropics. That's the solution we have at the moment for people coming from different climatic zones. It's the only solution, but since we use tree-based models, in principle, if our training is correct, you get a different calibration: different values for a scan from the tropics in Africa than for a scan from Europe or Russia. So you should really be getting different values. And I'll just add to that: over the course of the next year, one of the big goals is to move away from these pre-calculated, large global models towards a more flexible, localized modelling approach, so that if you have samples from just one region in Africa, you're using the most applicable samples in the dataset. Across the spectroscopy community, we've found that this localization of large databases can help improve predictions, often by eliminating bias in individual predictions. So that's definitely something we will be moving towards over the next year.
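Going back to the API demo shown earlier: preparing a scan file for upload might look roughly like this. The column layout (an id column plus one column per wavenumber) is an assumption for illustration only; the real OSSL API documents its own expected upload format on its interactive page.

```python
import csv, io

# Rough sketch of building a CSV payload of spectra for an HTTP
# prediction API. Band positions and sample values are made up.
bands = [600, 602, 604]                 # hypothetical wavenumber subset
scans = {"sample-1": [0.41, 0.43, 0.44]}

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["id"] + bands)
for sid, values in scans.items():
    writer.writerow([sid] + values)
payload = buf.getvalue()                # this string would be POSTed to the API
```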
There are a couple of questions in the chat, and then I'll get back to Andrew's hand. Laura asked about resampling and pre-processing of spectra to align with the databases. Yes. At the moment we use the prospectr package, and we actually do three pre-processing steps. One is the resampling, which you can see on GitHub: if you go to the imports, you can see that we use resampling with a bandwidth of two. For Africa, for example, there's the code; we use the resampling from prospectr with a spacing of two. That's the first step. The second step is in the modelling: we don't model with the raw spectra, but with the first derivative. There's a nice function in prospectr that combines smoothing of the noise with derivation of the derivative. If I go here in the models repo, you can see the exact function we use: it's called gapDer, a wrapper function that derives the first derivative but also smooths out the noise. And the third step is to derive the principal components: after you have the derivative, you derive the principal components, and we take 120 components for modelling. So instead of using 1,700 bands, we use 120. We played a lot with this PCA; we didn't know how many components to take, but we stopped at 120. I tried 60 and 120. Yeah, and I was just going to add that right now the first derivative is kind of a first step towards harmonizing the spectra across the different databases while we're working through all these issues of calibration transfer.
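The first two pre-processing steps described above can be sketched in pure Python. The project itself does this in R with prospectr (resample, and gapDer, which also smooths); this is only a minimal illustration of the idea, with plain linear interpolation and finite differences, not the actual OSSL code.

```python
# Sketch of two pre-processing steps: resampling a spectrum onto a
# regular grid with a given spacing, and taking a first derivative.
def resample(wavenumbers, values, step=2):
    """Linear interpolation onto a grid with the given spacing."""
    lo, hi = wavenumbers[0], wavenumbers[-1]
    grid, out = [], []
    w, i = lo, 0
    while w <= hi:
        while wavenumbers[i + 1] < w:
            i += 1
        x0, x1 = wavenumbers[i], wavenumbers[i + 1]
        y0, y1 = values[i], values[i + 1]
        t = (w - x0) / (x1 - x0)
        grid.append(w)
        out.append(y0 + t * (y1 - y0))
        w += step
    return grid, out

def first_derivative(values, step=2):
    """Simple finite-difference first derivative."""
    return [(b - a) / step for a, b in zip(values, values[1:])]
```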
So in a lot of cases this first-derivative transformation seems good enough to align the data across the different databases, especially the MIR data. But obviously we have the ring trial going on, where we'll get a lot more information about better ways of harmonizing and making sure that the spectra are all aligned with the models. And I'll just add that the principal component decomposition is also a way of speeding up the processing. We don't seem to lose any model performance, and we've reduced the size of the database tenfold, so it seems like a decent solution, especially for doing the modelling in R so that we can get the predictions in almost real time. Let's see, there are several more questions; I'll go to Scott, then Andrew next. Scott had a question in the chat: in the mlr routine, which statistical models are being used, and are the predictions an ensemble or an average of multiple models? Yes. If you look in the book, under using default models, it's all explained. This is an example of the output: it's an ensemble by stacking. I've been using the mlr framework for maybe two or three years now; it's quite robust, we know the group, and they help us directly. So this is the ensemble by stacking, with a meta-learner. In this case I just use a linear model, because I assume that if I use four nonlinear models, there's no need for a fifth nonlinear model on top. I fit four nonlinear models: random forest, XGBoost, Lasso and Cubist. In this case Cubist sticks out; it's a bit more significant than the Lasso, and the least significant is XGBoost. Random forest, Lasso and Cubist are kind of comparable, but for soil organic carbon, Cubist actually comes up as the most significant. And this ensemble, as I said, is a meta-learner based on five-fold cross-validation.
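The ensemble-by-stacking mechanics described above can be sketched as follows. The real OSSL setup uses mlr in R with random forest, XGBoost, Lasso and Cubist as base learners and a linear meta-learner; here the "learners" are trivial stand-ins, purely to show the out-of-fold structure.

```python
# Minimal sketch of stacking: base learners produce out-of-fold
# predictions, and a simple blend weight is fitted on them.
def kfold_indices(n, k=5):
    return [list(range(i, n, k)) for i in range(k)]

def out_of_fold(learner_fit, learner_predict, X, y, k=5):
    preds = [0.0] * len(y)
    for fold in kfold_indices(len(y), k):
        train = [i for i in range(len(y)) if i not in fold]
        model = learner_fit([X[i] for i in train], [y[i] for i in train])
        for i in fold:
            preds[i] = learner_predict(model, X[i])
    return preds

# Trivial stand-in base learners: global mean, and scaled input.
mean_fit = lambda X, y: sum(y) / len(y)
mean_pred = lambda m, x: m
scale_fit = lambda X, y: sum(yi * xi for xi, yi in zip(X, y)) / sum(xi * xi for xi in X)
scale_pred = lambda m, x: m * x

def blend_weight(p1, p2, y):
    """Least-squares weight w for the blend w*p1 + (1-w)*p2."""
    num = sum((a - b) * (t - b) for a, b, t in zip(p1, p2, y))
    den = sum((a - b) ** 2 for a, b in zip(p1, p2))
    return num / den
```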
And wherever possible we use blocking parameters, so we prevent overfitting. I can guarantee you that we don't overfit, because of this mlr setting where you can use the meta-learner with blocking. Andrew, you want to ask a question? Yeah, thank you very much, John, and thank you very much, Tom, for doing this kind of job within a very short period of time; it's really amazing. You've already alluded to my question: when you were showing the base learners for the fitted models, I saw that you include the principal component scores instead of working directly with the spectra. Since you said 120 principal components, was there a basis for selecting the 120 scores? So, we use the first 120 components, right. Usually, when you look at the importance of components, you can see that after about 20 components the contribution to the variance becomes very small. I've seen papers in soil spectroscopy where they use only the top 15 components. We played with the variable importance of the components, and there's something on that here: this plot shows the variable importance on a log scale against the number of components. You can see, as you would expect, that higher components become less and less important, but it's not one-to-one; there's quite some uncertainty around it, and often the best component is number two or three, with component number one maybe third or fourth. So my estimate is that there's probably no need to go beyond 100 components. But every variable is different, so there's still testing to be done.
But I just want to say, Andrew, it's so nice to hear from you; we haven't been in touch. Andrew is the person who introduced me to soil spectroscopy. We spent almost a month together here in Wageningen; he came in the winter months, so he had to buy a winter jacket and everything. So, Andrew, thank you so much for introducing me to the field, and also for helping build the soil agronomy data cube for Africa. Nice to hear you, at least virtually. Thanks, Andrew. So I'm going to go through more questions in the chat. Celebalo has another question about whether the spectra are reflectance or absorbance. Right now, for mid-infrared we follow the absorbance convention in the field, and for VisNIR reflectance. Hopefully we can make the import flexible, since it's a simple conversion between reflectance and absorbance, but right now, and Tom, correct me if I'm wrong, you need to upload spectra as absorbance if it's MIR, is that correct? Yes, exactly what you said: for VisNIR it's reflectance, for MIR absorbance. Personally, I wonder why there have to be two; it's just extra confusion. But here's the formula. It's a simple transformation; not linear, it's a log transformation, but it doesn't matter, it's not important as long as you don't mix them. If you use the Engine, the API specifically says: for VisNIR you have to prepare reflectances, and for MIR absorbances. Our code, by the way, also works with the raw data.
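The reflectance/absorbance relationship mentioned above is the standard one, absorbance A = log10(1/R). A tiny sketch; this is textbook spectroscopy, not OSSL-specific code, and note that the OSSL database additionally stores absorbance values multiplied by 1000 to avoid decimals.

```python
import math

# Standard conversion between reflectance R and absorbance A.
def reflectance_to_absorbance(r):
    return math.log10(1.0 / r)

def absorbance_to_reflectance(a):
    return 10.0 ** (-a)

a = reflectance_to_absorbance(0.1)   # a reflectance of 0.1 gives absorbance 1.0
```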
If you go to the book, you can see examples of working with the raw data: you can load OPUS files and ASD files directly. We have only those two formats at the moment; the ASD files are reflectance and the OPUS files are absorbance. So you can work directly with the raw data. Yeah, and people might have noticed that in the database we multiply the absorbance values by a thousand, just so we don't have any decimal points, to save space and memory, so don't worry about the scale being different from what you're used to looking at. So, a question from Patrick Smith: can you comment further on what you've found using MIR versus VisNIR datasets? We haven't done too much interpretation of the relative performance of the different models yet. One of the hard parts is that they're different databases; there are some subsets that are exactly the same between VisNIR and MIR, but we haven't run models with those exact same subsets to say exactly what the relative performance is. That's something I want to do as we refine these models: give a little easier-to-digest summary of the patterns we're seeing between the different types of models. Like, VisNIR for organic carbon is great; VisNIR for some other properties isn't quite as good as MIR. And my strong suspicion is that a locally specific dataset is going to perform better or worse depending on how representative the samples are. For VisNIR, the LUCAS dataset represents the majority of the scans, so European soils are going to be extremely well represented in the VisNIR space, and not quite as much outside Europe right now. Tom, do you want to add anything to that? Yeah, you see the distribution, you see my
screen, right, the distribution. So yeah, as John said, the VisNIR will be a bit biased towards LUCAS, and the MIR a bit biased towards the US. That's unfortunate. But I have to say I'm really surprised how much the accuracy drops with VisNIR, and I'm using exactly the same models and the same approach, so I don't know how people do that magic with VisNIR. At the moment it's very clear that there's a difference: only for a few properties do you get comparable accuracy with VisNIR; otherwise it's usually a drop of 20 to 30 percent. Although I would say the performance of a model is all relative to the purpose you're using the data for, and for a lot of purposes you don't need the most precise measurement, so in a lot of cases the VisNIR data is still fine. Okay, just going through more questions: were spectra captured from dried, mixed samples? Yes, Thomas, they were. These are all lab-based measurements, so well processed and dried. For VisNIR it's all dried and sieved to 2 millimetres, and for MIR they're all dried, sieved, and finely milled samples; the exact sample processing will vary between the different libraries we used. Fennie has a question: do you have a performance comparison of models based on KSSL versus OSSL when new spectra are within both feature spaces? Yeah, so this is something we haven't really dug into too much yet. I have three or four datasets specific to different regions of the world that I really want to test all these models on, as a way of starting to address the question of whether binding multiple databases outperforms just the KSSL, which comes from one lab, or whether the combination of datasets wins. It's a bit hard; I would say we haven't really done the right test yet.
Yeah, so this is the summary, and here you have the R-squared and the RMSE; sometimes I feel the RMSE is the most important for comparison. These are the different modelling approaches, and I was hoping this bundle approach, VisNIR, MIR, global covariates and depth, would win, but for soil organic carbon, for example, it has slightly worse accuracy than just using MIR. But here you have 72,000 points and here 17,000, so it's not really fair; I should run the MIR comparison with 17,000 points as well and see if the numbers hold, because with machine learning you get better accuracy with 100,000 points than with 10,000. That's something I will have to check. In this case, for this variable, log N I think, the bundle approach is actually a bit better, so there's a bit of improvement from doing the whole bundle. Sometimes it gets better, sometimes it doesn't, but it's comparable. With VisNIR alone, though, you drop to 0.345, almost double the RMSE, so there really is a drop from using only VisNIR, and as I said, for me it was a bit shocking to see that: the R-squared in many cases is around 0.6 for VisNIR based on cross-validation, so it's a bit lower. Then, when you do VisNIR plus MIR, the R-squared comes back up to 0.9, and 0.9 is kind of a threshold for me: an R-squared of 0.9 means you can do numeric predictions with narrow prediction intervals. And the second part of Fennie's question was: do we see performance effects and differences in the wet chemistry methods between labs, and do we apply transfer functions to compensate for the method and lab differences? Can I say something here, John? Yes, go ahead. So this data I'm showing you: if you go to the OSSL models repo, you
can load the whole regression matrix; it's publicly available. You just load it and you get a huge table: 150,000 rows with almost 3,000 columns. So for this question you asked: if you're really interested, if you're into soil spectroscopy, go ahead and test different datasets, different setups, different subsets, and let us know what you get. It's a public dataset, available for all kinds of testing; you can publish a paper, you can make a report. This is open data, so you don't need any permission from us; the only thing is to attribute that you used the OSSL. This dataset is, imagine, all the soil spectroscopy we could find in the world, compressed to about 500 megabytes. Basically that's it: you have everything as one file. Sorry, John, I just wanted to say that if you have questions about whether something would work better, please go ahead and test it. Yeah, Fennie, I mean, it's a good question as well. For the laboratory data we do have some harmonization steps: if the methods are comparable but the results are in different units just because of how they're reported, we do bind the wet chemistry data that way. But it is a good question, and obviously one we've been really concerned about: the quality of the wet chemistry data can be as important, if not more important, than the quality of the spectra in degrading the quality of the predictions, so it's definitely something we need to keep looking at. You published a paper on that, John? No, not on the wet chemistry differences, just on the spectral differences. But Leo Ramirez-Lopez showed us, in the webinar he gave, and it was a theoretical example, the relative contribution of variance from noise in the spectra versus noise in the analytical data, and
it's about even in its contribution to model error. There are a lot of questions, so yeah, this is great. So there's another question: there is a large Brazilian spectral library, are they contributing? We're actually working with a little bit of the mid-infrared data that José Demattê has compiled; we haven't been working with their VisNIR database yet, but we've been talking. The Brazilian group has developed a really nice app, with probably 40 to 60 thousand VisNIR spectra at this point, and we've actually been talking that maybe in another year or so, as they further develop their national capacity, we might try to merge what they're doing with this effort as well. But we are starting to work with their MIR data, so yeah, it would be great to get more Brazilian data, and just data from South America, into the OSSL. For many continents we have no data, and as you see, for the VisNIR we have almost no data for Latin America; also Australia, also nothing. Some MIR global data we only have thanks to ISRIC, really, and ICRAF, because they did, in 2014 I remember, when Prem Bindraban was the director, he also supported it, they did that I think within the GlobalSoilMap project. They did a thousand scans or something, and these are the points you see around the world, the ISRIC reference profiles. But in essence, yeah, the big problem of this project is that we have huge gaps, and we have Europe and the US overrepresented. That is a huge problem, and the only way to go around it is to call people. We are hoping we can get... we are not asking, you know, we know many countries keep proprietary soil spectral data somewhere, it's almost confidential, so we are not asking that now everybody gives out the data, but it would be nice if the groups that you know could donate at least a part of the data, like two percent of the data,
they just donate it. Or if they cannot donate, we are also okay to sign a data non-disclosure agreement; we only use it for data mining and we release only the calibration models, and we of course keep the data safe. That's also a possibility. But this is our hope, to get countries to contribute data. John remembers, I think in the case of Switzerland we asked for a copy of the data and they said no, it's impossible, they will never share that. I mean, it's fine; the global compilations that are already available are really nice to work with. These groups have been working for a number of years on their soil spectroscopy work; the AfSIS database is really an excellent resource, and it's really nice that it's publicly available. Same thing with the LUCAS dataset: it's great that it's a public resource, and both these datasets are really high quality. And I would say it's really nice: since we released the OSSL, I've been contacted by two different research groups saying they have some data that they would like to contribute. One of the things we could definitely help with: if the data is not already in a repository, our recommendation is to get it into a repository so that you get credit every time anyone works with the data, and that's something that we could easily work on with anyone that's interested, that has data and wants to work with us. And I think as this project moves forward, we purposely just started working with some of these larger, quality-controlled datasets, but it'd be really exciting to bring in a larger number of smaller datasets to start filling in these gaps as well, and that's definitely something we could help people with.
But at the moment we are very fast: if you send us the data, in one day we will import it. We'll harmonize it and check it, come back to you if we have some questions, and then one day we import it, and then it takes about a few days until it's integrated, and then hopefully the models we can update, as I said, overnight. So it should go very fast: if you send us the data, you contribute to the global good and you basically help people make better use of soil spectroscopy. Our dream, I think it's obvious, our dream is to really enable people to get primary soil data much more easily, so you don't depend only on commercial solutions and lock-in solutions. Many companies don't give you the raw data and don't show you how they do the calibration; we are a completely transparent, open community project. We are, as I tell people, like a GDAL for spatial data: we want to be a GDAL for soil spectroscopy data. Or maybe, if you're not familiar with GDAL, then we would like to be Android: what Android is for mobile phones, we would like to be for soils.
Thanks. So, several more questions in the chat. Conrad asked: did you standardize between different labs to account for different instruments? We talked a little bit about this earlier; right now what we're doing is primarily just using smoothing and a first-derivative transformation to try to bring the different libraries together, which seems to work pretty well, but we could definitely do better, and that's one of the objectives of this project, to do a better job there. And the other part of that, I think Tom mentioned, is that we're starting to use an instrument code as a dummy variable. But yes, John, as you said, this is just the beginning of course, and we can only make more accurate models from now on, but we depend on the point data, and we have to add more modeling options. We are also open, if you create models, if you do them so that it's open data and documented, you have to follow some minimum requirements, we can also host your models and we will include them in the documentation. In a way, it's not that we want people to use our models; we don't really care about that. What we care about is the community, and about having easier access to soil data. So if you make the models, and if John and me see that it's really well done, it's a new approach and you get a good accuracy, then we will host your models and enable everyone to use them through the API, through the S3, etc.
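The smoothing-plus-first-derivative harmonization mentioned in this answer can be illustrated with a small numpy stand-in; the real pipeline may use a Savitzky-Golay-style filter, so the plain moving average below is a simplifying assumption. The point of the derivative step is that it cancels constant baseline offsets between instruments:

```python
import numpy as np

def smooth_first_derivative(spectrum, window=11):
    """Moving-average smoothing followed by a first derivative.

    A simplified stand-in for the smoothing + first-derivative transform
    discussed above; the derivative removes additive baseline offsets
    between instruments and labs."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(spectrum, kernel, mode="same")
    return np.gradient(smoothed)

# Two simulated scans of the same sample on different instruments:
# identical absorbance peak, but instrument B adds a constant baseline offset.
x = np.linspace(0.0, 1.0, 500)
peak = np.exp(-((x - 0.5) ** 2) / 0.005)
scan_a = peak
scan_b = peak + 0.3                     # baseline shift between labs

interior = slice(20, -20)               # ignore filter edge effects
raw_gap = np.abs((scan_a - scan_b)[interior]).max()
da = smooth_first_derivative(scan_a)
db = smooth_first_derivative(scan_b)
deriv_gap = np.abs((da - db)[interior]).max()
print(raw_gap, deriv_gap)               # the derivative gap collapses to ~0
```

After the transform the two scans are numerically indistinguishable away from the filter edges, which is why this kind of preprocessing helps bind libraries from different instruments together.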
Scott had another question: what procedure was used for outlier detection, both in your calibration/validation samples and in the unknown spectra? So I think right now, in our calibration, validation and model building, we did not remove any outliers. Or did you, Tom? I know we discussed this, but I think it's like this: this is the cross-validation plot, not the fitting plot; the fitting plot is even more narrow. And you see there are these points, like one observation, where John says, ah no, this is probably some typo or it's an error. But I tried: I removed these, and when you have like 100,000 points and a few points sticking out, let's say 50, if I remove them they have no impact on the model, zero, the model doesn't change. I think outliers matter when you work with, you know, a thousand points; if you have a thousand calibration points and you build your own model, then two outliers can have a huge impact. But when you have 100,000 samples, I tried it, and I don't see any reason to remove these points here, for example: here we predict three and we measure like zero in the log space. So my experience is, with the larger data, as you go beyond like 20-30,000 training points, I don't see a need. But of course, if there's a typo, if somebody really made a typo, or somebody has a completely corrupt scan, we do want to remove that. So we will still program further functions to try to detect, not the outliers, but whether they are really typos, something really suspicious, and artifacts, and these will be removed. And then the second part of that question is really about when someone uploads new unknown spectra, and this is something where we will implement a full outlier detection routine, using either an F-ratio test or just a distance-based metric. But right now, I mean, Tom, do you want to show that piece, one of
the PCA plots again for me, the one you shared with me? Right now we provide the PCA plot just to show the relative representativeness of the uploaded spectra against the calibration set. Yeah, here's the plot. And so you can see, these are 20 new points, and in the background there are 70,000 training points, and you see that these 20 points don't seem to be in the center of the space, but of all the points nothing really sticks out. But if I look here in the engine, here's the engine, and I was running this for carbon, so wait, I have to load it again. So let me load my data. I think you just have to pick the model. Oh sorry, yes, I didn't pick the model; maybe people are also now logged in and testing it, so let's see. Here I load my data, and I pick the model for the organic carbon prediction, and you will see this. I think, John, you prepared this data, and on purpose you picked something a bit more extreme, so there are these two points that are out. Yeah, I think those samples are basically like bedrock; those are the ones that predict almost zero carbon. Yes, so that's really off; if you apply this model you should be careful with these two points, but we warn people about this, so it is visible and you can trace back the points. But definitely, in one of our next updates to this, I want to add an extra column to the prediction results that gives you a quantitative measure of whether a sample is an outlier or not. That's something that is kind of standard in my own lab group's workflow, and we'll add it to this workflow, but you can just visually see it right now. You know, in the Netherlands they have a saying: let a thousand flowers blossom. So I think the most important thing is that for the models you make, you do your best, you do your due diligence, you remove the artifacts, and you document how you fit
it, and then you expose the models, and then you let the people choose. I mean, if people eventually have, you know, 20 models for soil organic carbon, maybe 80% of users will just use one model, that's usually my estimate, but maybe different people will use different models. What we want to do as a community is to allow people to do very easy conversion to primary soil data. You take the soil spectral scan and you get, let's say, in 2022 now we have 16 properties, but we would like to get to about 50. So you take a soil spectral scan and you get your 50 soil properties, and you get the confidence interval and the report, and then if you want to come back and update this in maybe six months, or use another model, go for it, it's up to you. And Tom, the last question on the list, I feel like you planted this question: how do we send our models to be included in your repository? So if you just want to describe how to contribute a model: there is a whole section called "Register new model". The best would be if you fit the model with the OSSL data, but you don't have to; you can also just use local data. The way we do it is that we dockerize it, so you provide a Docker image, and then you register and get a DOI for your model. Once you have a DOI, you just send it to us: this is the model and this is the Docker image. Then we check it, and after we check it, we implement it in our system, and every time people use your model they have to cite your DOI, or cite whatever you provide. So that's the plan. If some of you are new to Docker: this is how it opens, here you have one of the Docker images we have. This takes a bit of time to set up, but once you set it up you can multiply it multiple times, and everybody can track it, because with
Docker you can see which libraries and which packages are used and everything, so it's very detailed. So that's the answer to that question, and yes, we welcome the models; we would very much like to host models. But you cannot just throw files at us, that's not going to work: they have to be made with due diligence, and it has to be an open license. Otherwise, yes, we can host the models. And we will move this to a cloud solution; I didn't mention that now we have all this stuff you see running on two servers, plus our high-performance computing infrastructure, but we will move it to a cloud solution in 2022. Cloud means we could have potentially a million people using this every day, and we could keep it still at a very low cost. And that means that if you have a model, you could reach a lot of people; we could make sure that it's accessible to everyone, also to developers, and if you want attribution, they will attribute your models. Okay, well, thanks so much for all the great questions and feedback. I would say once again, Tom, you did a great job walking us through the OSSL, and I would love to hear more feedback from anyone that's using it: what's working, what's not working for you. Over the last two weeks we've made a lot of improvements; the modeling works a lot better than it did when we released it on December 5th, and we're going to keep working in the background. The nice thing is the front end stays the same as we keep improving the models, so the front end won't change, but I would love feedback on the front end: can you navigate your way around, is there anything really confusing, do we need to document it better? All sorts of feedback like that would be great, and probably the easiest is to just email any feedback you have to the soilspec4gg
at Woodwell Climate email address, or myself or Tom. Also, if you find a bug, the best is to put it under Issues, and if you want to start a discussion, please use GitHub. Just have in mind this is a public project, so anything you write will be public, and if it's not relevant to the project we might remove it. So that's the other channel where you can post or start a discussion. As I said, there are people already asking; I saw that in these imports there was actually probably a bug, you see here, and this is the best way to ask: then I can trace it back and we can check it. And then Fenny had a good comment or thought for everyone: can we see the contribution of a library to the model result when using the OSSL, would it make a difference? Yeah, if I understand that, I think that's actually a really neat idea: if there's a new library that someone has that's not in there, can we add it and actually look at the model and the improvement before binding everything together? It could actually be a nice way to work with individual groups, especially in unique geographies, where it would be an interesting scientific exercise as well: when does it matter? If we had 500 samples from a country that wasn't represented, does that make a difference for overall global predictions, or does it make a big difference for predictions within that region? It would be a really neat exercise to go through, and hopefully it would also demonstrate the value of adding more data to the library. Yeah, that was exactly the thought; I was trying to think how we can encourage people to contribute data, that was the backdrop of that thought, and maybe this helps. I really don't know if it helps, but it might be something to consider, and to ask the community what they would like to see, just to
generate the data. Yeah, that's a great idea. It would be nice if GLOSOLAN and the FAO would give awards to organizations that contribute to open data science projects; I think that would be symbolic, because we are also low budget, we cannot afford too much. So maybe let's make champions of soil spectroscopy open data, and maybe that's the way we can motivate them. Fenny, what do you think? I would be completely in favor; I'm really in favor of open data and of stimulating all initiatives that make that happen, and of judging not just on words but on actions. And in that way, I already said it in the chat, but I want to stress: I think it's amazing what you guys put up in a short amount of time. It's a great start, and sure, there are a lot of things to explore and discuss later, but you have something working, it's a great start, and I think the community can build on this and help to improve it. So I'm all for it. Tom, I don't know if they listen to me, but that's another thing. Okay, well, here's the tip one more time: make champions of open soil spectroscopy data and promote them. These people should get encouragement and they should be celebrated, really anybody, any organization, any institute. And I'm talking especially about data that is already paid for by the governments; many governments already paid to produce this data, so it's not that they have to sell it or something. It's just about making that step, being a bit brave and saying, okay, we give you the data. And if one or two organizations come, then the others will follow. You always need that with any project like this. In this case, I think the OSSL is really built around the USDA and the work of Rich and Scarlett and the KSSL, because that's where it all started: they said, okay, we give the data, we'll make it publicly available,
and now they made that first step. Then the LUCAS data also became available; we spoke with the JRC and we had to get their blessing to put their data online. We had to shift the coordinates of the points, because the coordinates in LUCAS are randomly moved by about one kilometer; otherwise we're not allowed to release it. So it was the KSSL, then the JRC, and then of course AfSIS has a lot of data, and recently they also put GhaSIS, TanSIS and NiSIS online. Those are really, I think, the three big organizations to start with; then the ICRAF-ISRIC Global Spectral Library also makes a big difference. Yeah, at the moment it's a small amount, but they are champions; now we just need more champions. Okay, I think that's a great note to end this meeting on. Thanks everyone for coming, and thanks again to everyone that's contributed data and contributed time to this project. And I couldn't see Keith, but my best regards to Keith; lots of ideas coming from Keith, he's really the pioneer of soil spectroscopy, and I just want to tell him that these things we discussed, now we're doing it, it's happening, so I'm happy about it. Yeah, and I'll just say happy holidays to everyone, have a great break if you're taking time off; we'll keep working on this, we'll keep reporting back, and I'd love to receive feedback.
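For readers who want to experiment with the outlier screening Tom and John describe earlier in the discussion (projecting new spectra into the calibration PCA space and flagging points far from the cloud, "bedrock-like" samples being the extreme case), here is a rough numpy-only sketch. The feature dimensions, number of components and flagging threshold are all illustrative assumptions, not the OSSL engine's actual settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibration "spectra": low-rank correlated features standing in for a library.
n_cal, n_feat, k = 1000, 40, 5
basis = rng.normal(size=(k, n_feat))
X_cal = rng.normal(size=(n_cal, k)) @ basis + 0.1 * rng.normal(size=(n_cal, n_feat))

# PCA of the centered calibration matrix via SVD.
mean = X_cal.mean(axis=0)
_, _, Vt = np.linalg.svd(X_cal - mean, full_matrices=False)
scores_cal = (X_cal - mean) @ Vt[:k].T
sd = scores_cal.std(axis=0)

def score_distance(X_new):
    """Standardized distance of new samples from the calibration cloud center."""
    scores = (X_new - mean) @ Vt[:k].T
    return np.sqrt(((scores / sd) ** 2).sum(axis=1))

ok_sample = rng.normal(size=(1, k)) @ basis          # looks like the library
weird_sample = 100.0 * rng.normal(size=(1, n_feat))  # "bedrock-like" extreme
d_ok = float(score_distance(ok_sample)[0])
d_weird = float(score_distance(weird_sample)[0])
flagged = d_weird > 3 * np.sqrt(k)   # a simple, assumed flagging threshold
print(d_ok, d_weird, flagged)
```

The extra "is this an outlier?" column mentioned in the discussion would report something like `d_weird` alongside each prediction, so users can decide whether to trust the result for a given sample.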
Tom Hengl is the Co-founder of OpenGeoHub foundation and Technical Director. In the SoilSpec4GG project, Tom leads the development of the OSSL DB, R packages and web services. In this talk, Tom reviews the novel Open Soil Spectral Library (OSSL), consisting of multiple interrelated components. The first is a large database consisting of several continental and national collections of spectral data in both the visible-near infrared (VNIR) and mid-infrared (MIR) regions of the electromagnetic spectrum, as well as traditionally measured soil properties. These data can be accessed and visualized through the OSSL Explorer or directly worked with through an API or by downloading a snapshot of the entire database.
10.5446/55636 (DOI)
We still have a stream of people coming in, and I assume this will continue for a while, but we may as well get started. So thank you all for joining us. It's nice to see a handful of familiar faces, and nice to see a lot of unfamiliar faces and names on the participant list. Before starting, a few words of thanks to the various people and things who made this possible. First of all, the coronavirus, which is the reason we are meeting here and not at the usual summer schools. And a word of thanks for the organizing committee, which apart from me includes Louigi Addario-Berry, Lea Popovic, Alexander Fribergh and Mathav Murugan, and also Saraí Hernández-Torres and Thomas Hughes. In a way it was Saraí's original idea, when we realized we couldn't have the Vancouver summer school, to try to move various activities online. So thanks to all of those. Now, as you can see, this week we have a course by Jean-Christophe Mourrat; next week there is a course by Gady Kozma, and we will continue later on. You can see on the website the list of future courses that we will have. A quick word about logistics: we have the chat here in Zoom, and you can ask questions in the chat. Most of you cannot unmute yourselves, but if you ask a question in the chat we will either unmute you, or one of the moderators will ask the question to the speaker. So please do try to ask whenever anything is unclear. Finally, we have the Zulip forum, and I see a lot of people are online there as well. I will also be monitoring that during the talk; if there are questions there, they will be flagged for Jean-Christophe. The advantage of the Zulip forum is that it remains active also after the meeting, and it can be used to discuss various questions that arise from the meeting, or from problems that might be assigned in the meeting. So the Zulip forum is there for you all the time. I think that's a few words about the procedures.
So I will pass you to Louigi now to introduce our speaker, and we can get started. So again, welcome everyone. Thank you, Louigi. Okay, thank you very much. I'm having my own technical difficulties at this end, unfortunately, so if my signal is weak I apologize in advance. I will just briefly go over the plan. We will have roughly a one-hour lecture, which we plan to divide in half, with a short question period and break after half an hour, followed by the second half. We'll then stop the recording and have a second question period, after which participants who would like to can join breakout rooms to introduce themselves to one another in smaller groups and discuss the lecture. We'll ask our lecturer to stay in the main room at that point, so participants who prefer to stay in the main room and ask him questions directly can do so. Let me also remind you that this lecture is being recorded and streamed to the web, and will be made available on the web after the lecture, so if you don't wish to be recorded, it's best to turn your video off and mute yourself. One further point: in order to try to stimulate a bit of conversation and dialogue, we're going to select some random participants and give them the power to unmute themselves. So if you discover that you've been made a co-host, that means you've been randomly chosen, and in that case you're welcome to ask questions during the lecture. Having co-host privileges also means you can do things like accidentally kick people out of the meeting and whatnot, so please be careful if you've been given such privileges. This is a bit of a trial run and we'll see how it works; if it does, hopefully it will foster more of a dialogue and a bit of a feeling of interactivity in the lecture. So with that, it's my pleasure to introduce Jean-Christophe Mourrat from New York University.
Jean-Christophe received his PhD in 2010 under the supervision of Pierre Mathieu and Alejandro Ramírez. He then spent three years at EPFL in Lausanne, followed by six years as a research scientist with the CNRS, based first at ENS de Lyon and then at ENS Paris. In 2019 he moved to the Courant Institute of Mathematical Sciences at NYU, where he's currently an associate professor. So JC, it's a real pleasure to have you launch OOPS with a mini-course of three lectures on rank-one matrix estimation and Hamilton-Jacobi equations. I'll now invite you to share your tablet and start your lectures. Thank you. Thank you very much. So let me try to share my screen. It's a great pleasure to be here with you guys. I'm really excited to make this experiment of an online summer school; I hope this will be a successful experiment, and I hope we will have good conversations about the topic. I changed the title a little bit, but it's the same; it's just that I will speak about my motivations, which are a bit broader, so I slightly reworded the title, but it's really what was asked. So thanks to the organizers, in particular, for making this possible. I hope that the people who became co-hosts are aware of this. And let me say one more thing, which is, for those who can unmute themselves directly: please feel free to interrupt me, because there's no other way for me to know you want to speak. It's hard for me to see if you're raising your hand or something, so just stop me. All right. The sound is good? Can you hear me okay? Yes. And for everyone else, you can ask questions in the chat, as was said. I'll try to look at it, but I also hope other people will help me to see what's going on in the chat if something happens and I forget to look. All right, so let's start. So yeah, I want to spend a bit of time describing some of my motivations, which are a bit beyond, or maybe different from, the main focus of the talk.
And to give you a little story around the motivation, we can imagine that it's like in the Harry Potter series, where students arrive at Hogwarts and, when they arrive, they are supposed to be assigned dormitories. In the story there are four dormitories, but for simplicity I'll just assume there are two, and the students are supposed to put this sorting hat on their head, and then the hat decides into which dormitory each student goes. So I'm going to start from this, make a very simple model of that situation, and see where it leads. Let's say that we have these students, capital N of them, labelled from 1 to N; these are their names, if you want. And let's say there are two dormitories, and an assignment of students to dorms will be encoded by a vector of plus 1's and minus 1's. So for instance, if the vector starts with a plus 1, it means that student number 1 was assigned to dormitory plus 1, and otherwise to dormitory minus 1. The magic sorting hat is trying to optimize the quality of the assignment, and in the model I'm describing, we can imagine that there is a parameter that describes the quality of the interaction between students i and j. Again, this is a toy problem, a simplified problem, so I'll just assume that these random variables are independent standard Gaussians. And, you know, we are good people, so we want to maximize the quality of the interactions. By quality of the interaction I mean: if J_ij is positive and very large, it means individuals i and j really like to be together, and if it's negative and very large, it means they really dislike being together. So what we would like is to maximize the total sum of the interactions in each of the dorms that we are trying to form.
So it's the mapping that to each assignment sigma associates the resulting total interaction: it is the sum of J_ij times the indicator that students i and j are in the same dorm, which here is encoded by the indicator that sigma_i is equal to sigma_j. And one question you can ask, before even trying to identify the optimal sigma, is: what is the behavior of the maximum of this function? What is the max over sigma, and how does it behave when N becomes very large? This is the first question I want to ask. And to be more consistent with the standard statistical mechanics setting, instead of writing this indicator function that sigma_i is equal to sigma_j, I'm going to write the product sigma_i sigma_j; if you think about it, there is just a constant shift between the two quantities, so it's okay to do this change. I want to understand this as N becomes large; I want to understand the asymptotic behavior. For those of you who are familiar with the Ising model, this is a bit the same: we have a sum of sigma_i sigma_j, except that the J_ij are all equal to one, and in that case it's very easy to figure out what the best possible configuration is: we would just take every sigma_i equal to plus one, or every sigma_i equal to minus one. But here, because the J's are disordered, because some are positive and some are negative, it's not obvious what the best configuration is. If you have three individuals, say i, j and k, and i wants to be with j, j wants to be with k, but k really doesn't want to be with i, there's no way for you to make each pair maximally satisfied. So we could say that there are frustrations in this problem, and this is really a signature of what are called glassy systems; these models are often called spin glasses in the literature. The important point is this aspect of frustrations.
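For reference, the energy just described, and the constant shift between the indicator form and the product form, can be written out explicitly; the double sum over both indices follows the speaker's wording (he later confirms the sum runs over i and j from 1 to N):

```latex
% Energy of an assignment \sigma \in \{-1,+1\}^N:
H_N(\sigma) = \sum_{i,j=1}^{N} J_{ij}\,\mathbf{1}\{\sigma_i = \sigma_j\},
\qquad J_{ij} \text{ i.i.d.\ } \mathcal{N}(0,1).
% Using \mathbf{1}\{\sigma_i=\sigma_j\} = \tfrac{1}{2}(1+\sigma_i\sigma_j):
H_N(\sigma) = \frac{1}{2}\sum_{i,j=1}^{N} J_{ij}\,\sigma_i\sigma_j
            \;+\; \frac{1}{2}\sum_{i,j=1}^{N} J_{ij}.
% The second term does not depend on \sigma, so maximizing H_N over \sigma
% is equivalent to maximizing the product form.
```

This is why the switch from the indicator to the product of spins changes nothing about which sigma is optimal.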
In other words, it is difficult to find the optimal configuration, and if you think of it as an optimization procedure where you try to adjust each sigma_i one after the other, then sometimes, to find a better configuration, you will first have to make a move that looks detrimental, and then maybe another one, and then ultimately it will improve things; but it is difficult to find the optimum. I was looking at the chat; so far so good, the people in the chat are happy. Yeah, there haven't been any questions aside from technical issues. Awesome. Okay, I realized that there was a big stream of messages which I hadn't seen. So the maximum is an interesting question, but I want to relax it. Is the matrix J symmetric? I don't assume it to be, because this is more convenient for the analysis, but in fact the problem only depends on the symmetric part of this matrix of J's. So I don't assume it, but it's not crucial. Thanks for the question. Okay, so I want to consider a softer version of the maximum, which for people who have done statistical mechanics before will be very natural, because I want to look at the Gibbs measure associated with this function. So I put a normalization here, and for now you will have to believe me that this is a reasonable normalization; I put the parameter beta in front of the energy function from before, and I'm going to sum over all sigma. You see that if beta is very large, then this sum is essentially carried by the maximum, so if I then take the logarithm, in the regime of very large beta this is essentially like the maximum; but then you can play with beta, and it's like a relaxation of your problem.
So I did not explain this, but the way I normalized it, we should expect that what's inside the exponential is of order n; so if I want the thing to stabilize as n becomes very large, I should divide by n. Finally, this quantity is still random, it still depends on the J's, but the fluctuations are not very large, so you should not worry about this; just to kill them, I'll take the expectation with respect to the J's. So now this is associated with a model of statistical mechanics which was introduced by Sherrington and Kirkpatrick a long time ago (and it was not related to Harry Potter initially). Initially it was not even clear what the limit is, and ultimately Parisi, in the late 70s, proposed an answer for what this becomes when n goes to infinity. I'm not going to write it, because it is actually a fairly complicated formula. The main emphasis of the course is not on this model specifically, so I don't want to spend too much time on it, but I still want to mention it, because for me it is an important part of my motivation.
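For very small n, the whole object can be computed by brute force. A sketch: the 1/sqrt(n) scaling inside the energy is my own addition (it is the standard Sherrington-Kirkpatrick choice; the speaker only says "a reasonable normalization"), and the Gaussian couplings and seed are invented for the demo.

```python
import itertools
import math
import random

random.seed(0)
n = 8
# Gaussian couplings; the 1/sqrt(n) inside H is the standard SK scaling (my assumption).
J = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]

def H(sigma):
    return sum(J[i][j] * sigma[i] * sigma[j]
               for i in range(n) for j in range(n)) / math.sqrt(n)

energies = [H(s) for s in itertools.product([-1, 1], repeat=n)]
H_max = max(energies)

def free_energy(beta):
    """(1/(beta*n)) * log sum_sigma exp(beta * H(sigma)), via log-sum-exp."""
    m = beta * H_max
    return (m + math.log(sum(math.exp(beta * e - m) for e in energies))) / (beta * n)

# Squeeze: H_max/n <= free_energy(beta) <= H_max/n + log(2)/beta,
# so large beta recovers the normalized maximum.
for beta in [1.0, 10.0, 100.0]:
    print(beta, free_energy(beta), H_max / n)
```

The upper bound comes from log(2^n * exp(beta * H_max)) = beta * H_max + n * log 2, which after dividing by beta * n explains why dividing by n is the right normalization.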
There is this formula, about which I find it very difficult to build intuition, and which I like very much; I find it mysterious in some sense. The expression on the left-hand side took a while to write, but deep down I find it very simple: you have this function, a sum of J_ij sigma_i sigma_j, and basically I want to look at the maximum, or some relaxed version of it. If you think about a disordered high-dimensional problem, it's hard to think of something simpler. And yet the right-hand side is very complicated and reveals a very bizarre structure, and I find it very difficult to understand the relation; so I thought it would be interesting to think about it more. Let me give some references: this formula was proposed by Parisi in the late 70s, and over the course of time people progressively became more and more convinced it was correct; then Guerra proved one bound and Talagrand the matching one, so it is now rigorous. I like to think about it, I find it an interesting problem, but I want to stress that it's already proved. (Oh yes, sorry, I forgot to mention: the sum inside the exponential is over i and j between 1 and n. Thanks for the question.)

Part of the reason I find it mysterious, what really revealed to me that I don't understand this formula, is that I wanted to think about a slight variation of the problem: instead of having connections between each pair i and j, what if these things are organized in two layers? So you would have a situation like this: I draw circles for the i's; the i's go from one to n, but now, say, from one to n over two is the top layer and the rest is the bottom layer. What happens if we only have interactions between the two layers, like
this, but no interactions within a layer, no interactions like this? It seems a relatively innocent modification, but it so happened that I could not understand what I was supposed to write in place of the complicated formula which I didn't describe. So I took that as a good sign that I should actually think more about this problem. I will now stop speaking about this model; it's called a spin glass model, and in case you want to hear more about it, there is a talk I gave at the One World Probability Seminar on this.

So that was part of the story of the motivation, but now I'm going to move to the problem that I actually advertised in the abstract, which is rank-one matrix estimation, or maybe I should say matrix inference. The setting is that of a statistical question: you are trying to observe something, but unfortunately you only observe a noisy version of it. We observe a noisy version of a rank-one matrix, and the question is: can we recover information about this rank-one matrix? You should imagine a very, very large matrix. I don't claim the setting I'm going to describe is fully realistic, but sometimes it's useful to have a concrete situation in mind when we think about a problem. So if you want a concrete setting, you can imagine that you are Netflix and you are trying to understand how to make recommendations for your customers. You have a bunch of customers, and in a simple model of the situation you could assume that whether or not a person will like a movie is captured by a few parameters describing the movie and a few parameters describing the customer. For instance, for a movie, maybe whether it has
more action or more introspection, or whether it is sad or joyful; maybe there are two or three aspects you care about, and two or three aspects of the customer that describe how they react to the characteristics of the movie, and ultimately how much the person likes the movie is some combination of these parameters. So you see I'm building a very big matrix which has a very low rank: only a few parameters vary. As an extreme simplification of the problem, you can assume it's just a matrix of rank one. I'm going to describe the setting with rank one, but it's not difficult to generalize to higher-rank problems.

Another setting you can think about is community detection. For instance, in the US the political situation is relatively polarized, so if you want to guess whether two people would be friends, there is a binary variable which, if you know it, tells you a lot about whether these two people will be friends. So maybe you observe the graph of connections and you want to recover these communities; if you think about it, this is also an instance of observing a noisy version of a rank-one matrix. You can imagine that each person has this binary variable attached to them, and whether two people want to be friends is very much influenced by the product of the two variables. So these are two settings you can think about; I will not come back to these more applied aspects later, but they give you motivation. And the common thread between these two problems, the thread I want to emphasize in this lecture, is their relation with certain partial differential equations which are
called Hamilton-Jacobi equations. I don't know if everyone connected to Zulip, but as I said there, I don't assume that you know these PDEs beforehand; basically the point of the lecture is to get familiar with how they work and how we can use techniques from this context to solve the rank-one matrix estimation problem. I find this point of view through partial differential equations interesting because, first of all, it allows us, or at least it allows me, to better understand the nature of the difficulties that arise; and also, this intuition comes with a lot of tools, techniques and ideas that were devised for studying these equations. If you say this keyword, it basically comes with a toolbox, and then you can see what's in the toolbox and maybe some of the tools are useful. And maybe (I'm not sure about that) for the spin glass model it is in some cases unavoidable to speak about these equations, but I don't know.

All right, I see some more messages in the chat. The rest of the lecture will be divided into two parts, and the first part will be on the Curie-Weiss model. The Curie-Weiss model is really an extremely simple model and it can be solved in many, many different ways, but I want to emphasize a particular way to solve it that uses this intuition with Hamilton-Jacobi equations, and I want to spend a decent amount of time really doing it well. The point is that when we next turn to the problem of rank-one matrix estimation, the proof will be almost the same. So maybe you're not passionate about the Curie-Weiss model because you think it's too simple, but in case you're in this
category, please consider that the real problem I want to solve in this lecture will ultimately be solved using exactly the same technique, so I think it's important that we understand it well. The way I'm going to treat the Curie-Weiss model is not standard, and if you want to see a more standard way to solve it, I recommend the book of Friedli and Velenik; someone already gave a link in the Zulip chat, so you can have a look there. Maybe it's a good time to take a few questions, then have a little break, and then I move to this first part. Are there questions on these motivations? So the reference was Friedli and Velenik, I'm typing it; the title of the book is "Statistical Mechanics of Lattice Systems", it's available on the internet (thanks for the link), and it's also a good introduction if you want to know why statistical mechanics does things in these ways, why we look at these sorts of measures.

Okay, I'm not sure exactly how to interpret this question about motivations, but I'll try. For instance, one question we would like to ask (I'm not sure I will have time to fully explain this) is: can you actually recover meaningful information on your rank-one matrix? This will depend on the strength of the noise, and it will depend in a way which is quite striking. If you're already familiar with the Ising model, you know that there is a phase transition between a disordered phase, where there is no organization among the spins, and an ordered phase where a magnetization appears; for this inference problem there is also a phase transition of this sort. Which means that, in this context, when the signal-to-noise ratio is too weak, you will not be able to recover meaningful information on the rank-one matrix; I will call it the signal. So there
will be a regime of signal-to-noise ratio in which you basically cannot recover information on the signal, and then, beyond a threshold, suddenly you start to be able to recover at least partial information on the signal. If you want to understand that aspect, the study we are going to do essentially gives it. Does that answer the question?

Adam asks: can we fix the number of plus ones and minus ones? I think it's possible, yes. In general you could even imagine completely changing the reference measure. The way I presented it, this sum over all sigmas is a bit like averaging over a Bernoulli measure of plus one, minus one with probability one half each, but you could change this reference measure to something else. Maybe you're not only interested in sigmas valued in the set of plus one and minus one; maybe they could be uniformly distributed on an interval, and you still want to understand this sum of J_ij sigma_i sigma_j. Now that I understand the Hamilton-Jacobi point of view, I can see how you could also do it with the other point of view, but for me it was useful for understanding how to generalize to arbitrary measures. So the question you're asking, fixing the number of plus ones, can also be encoded as changing the reference measure, except that in the examples I was giving it would be a product measure on each coordinate, while here you're imagining another type of constraint: you would look at the uniform measure over all configurations that have the same number of plus ones and minus ones. So, long story short, I think you can change this, and the way I would understand it is that it amounts to changing the reference measure on our plus one, minus one
set. Yeah, I should try to give shorter answers.

Okay, there are two questions; the second one is a bit easier, so I start with that one. Someone says it's not clear what the connection is between the rank-one estimation problem and the spin glass problem. That's normal; we will see the connection very rapidly when we start to study it. We will want to study the law of our signal conditionally on the observation, and we will be able to write it as some sort of Gibbs measure whose form is fairly similar to the spin glass case: it involves a sum of a noise matrix times sigma_i sigma_j type terms. Okay, that's my answer. Then, the parameter beta in the free energy: someone asks what this parameter beta is. In statistical mechanics it can be understood as the inverse temperature, and in more pragmatic terms it's the parameter that lets you interpolate. If you set beta equal to zero, then there is no complication: you're just summing over the reference measure, which, as I said, I like to think of as a product of Bernoullis, and this you understand perfectly well. If you care about the maximum, you really want to send beta to plus infinity, and in between there is some more flexibility. In statistical physics language, beta very small is very high temperature and beta very large is very low temperature. All right, how much time do I have; do I have time for a three-minute break? [Moderator:] Yes, I think a three-minute break is good. Let me just comment that on Zulip, since we are all new to the system: if you don't see the stream, you need to subscribe to the stream; there is a link to the
stream for this course, posted both in the general stream, to which everyone is subscribed, and also, if you scroll up the chat here, there is a link to it. We'll try to figure out later a way to have everyone subscribed to the stream so that you can see it, and the reference to the book is also there now.

All right, maybe I should restart; is that fine? Awesome. So now, for this first part, as I said, we are going to study the Curie-Weiss model, so let me define it. We want to study the probability measure that to each configuration sigma in the set of plus one, minus one to the power n associates a weight proportional to the exponential of t over n times the sum of sigma_i sigma_j. You can think of it a bit as if I had set all the couplings J_ij equal to one. We will see in a second that the normalization with one over n is reasonable, and for some reason I like to rename beta and call it t, which is a little confusing because this little t is really an inverse temperature, not a temperature; but okay, we'll have to live with that. For people familiar with the standard Ising model: the Ising model can be defined on any graph, and this would be the Ising model on the complete graph. I also want to add an extra term, a magnetic field: there is no interaction in this term, it's just a tilt on the sigmas; it encourages the sigmas to take a particular value. So I want to study the probability measure which to each configuration associates a probability which is some constant times the factor written here. The notation I will use is this bracket: if I write f of sigma with these brackets (it depends on the choice of the parameters t and h, but I will often forget to write these indices), it is the sum over sigma of f of sigma
times the exponential of the thing above, which I don't rewrite, I just write dot dot dot, divided by the normalization, the sum over sigma of the exponential of dot dot dot. Is the definition clear? So the question is whether there is a sign condition on t and h. I think of t as being positive, or non-negative, but h can be any real number; you are also allowed to insist that you want the sigmas to be negative if you want. If h is positive it encourages the sigmas to be plus one, and if h is negative it encourages the sigmas to be minus one. So t is non-negative and h is a real number; thanks for the question.

Okay, the definition of the probability measure is clear. Now, is there an interaction in the first term? Yes; sorry, I have a bad habit of not writing the indices: this is the sum over i and j between one and n. There are interactions in this term because sigma_i is talking to sigma_j in the sum: the quantity inside the exponential depends not just on what each sigma_i is doing individually, but on products of them; that's why I call these interactions. By contrast, the sum over i of h sigma_i has no interactions: it's just a linear sum of the sigma_i's, while the other one is not a linear sum. All right, so the first thing I want to study, instead of focusing on this measure directly, is the following quantity: basically I want to study the normalization constant in this ratio. (There was a question in the chat about the sign of t and h; I think I answered it: t is non-negative and h is a real number. Thanks, it's good that
you check, because I try to look at the chat but sometimes I miss things. Another question: is the sum over ordered or unordered pairs? I sum over all pairs, so there are n squared terms in the sum; any other way I'd try to answer would confuse me, so: there are n squared terms.)

All right, so I want to study this quantity. You see, it's just the normalization factor in the definition of the measure. I expect the thing in the exponential to be of order n, so I take the log, divide by n, and give it a name. Perhaps your first reaction is that this seems to miss the point: we don't want to study normalization constants, we want to study the measure; who cares about normalization constants? But I think this is misleading, because what this quantity really does is record the moment-generating function of the quantities you care about. If you think of this (up to dividing by two to the n) as a probability measure, then you are really computing the exponential moment-generating function of these variables, and if you understand the moment-generating function of these variables, you understand these quantities well. Okay, I will answer the question in a second. From a physics point of view it's natural to have what's inside the exponential be of order n. I did not explain the normalization, so: this term is of order n, since it's a sum of terms of order one, and this one would be of order n squared, so if I divide by n it is of order n again, and then it fits my requirement that what's inside the exponential be of order n. From a physics perspective this is natural; it's like an extensive quantity
in the size of the system. But also, if you want your measure to really exert a tilt on each of the variables individually, you do need something of order n inside: this parameter h, for instance, produces a tilt on each of the variables, and if you were not scaling in that way it would not produce such a tilt.

All right, so I want to study this function, and I promised partial differential equations, so what I want to do is take derivatives. (Yes: in the first example there was a one over square root of n, because the quantity in front was random and centered, so you have to think harder about the actual contribution of the best sigma; it was a more subtle scaling. Here it's simpler: the scaling is deterministic.) So let me take derivatives of this quantity and see what happens. This is the derivative with respect to h. I have a logarithm, so when I differentiate I get the derivative divided by the function itself. First I look at the derivative of the thing inside: when I differentiate with respect to h, whatever multiplies h comes out of the exponential and is brought down, and you see that ultimately it has a form similar to what is written here, except that the f of sigma is the sum of sigma_i. So when I differentiate with respect to h, there is this one over n which stays, and then a ratio: the sum over sigma of (sum of sigma_i) times the exponential of blah blah blah, divided by the sum over sigma of the exponential of blah blah blah. I already said it, but you see that it's really of this form here, this ratio of two things: it is one over n times the Gibbs average of the sum of the sigma_i's.
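This h-derivative computation is easy to check numerically by brute force for small n; a sketch (the parameter values n = 8, t = 0.7, h = 0.2 are my own choices, not from the lecture):

```python
import itertools
import math

def F(n, t, h):
    """F_n(t, h) = (1/n) log sum_sigma exp((t/n) * S^2 + h * S), with S = sum_i sigma_i."""
    total = sum(math.exp((t / n) * sum(s) ** 2 + h * sum(s))
                for s in itertools.product([-1, 1], repeat=n))
    return math.log(total) / n

def mean_magnetization(n, t, h):
    """Gibbs average of (1/n) sum_i sigma_i under the Curie-Weiss measure."""
    num = den = 0.0
    for s in itertools.product([-1, 1], repeat=n):
        w = math.exp((t / n) * sum(s) ** 2 + h * sum(s))
        num += (sum(s) / n) * w
        den += w
    return num / den

n, t, h, eps = 8, 0.7, 0.2, 1e-5
dF_dh = (F(n, t, h + eps) - F(n, t, h - eps)) / (2 * eps)  # centered difference
m1 = mean_magnetization(n, t, h)
print(dF_dh, m1)  # the two numbers agree up to finite-difference error
```

Note that the double sum over i and j has been rewritten as S squared, which is the Curie-Weiss simplification used throughout.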
if I do the calculation with the t-derivative, a very similar thing happens: I have the one over n in front, and then whatever multiplies t, in this case this sum of sigma_i sigma_j, shows up in the expectation. So there's still a one over n in front, and then the Gibbs average of the sum of sigma_i sigma_j. Do people agree with that derivation? (Sorry, this is a sum over all i and j between one and n.) And because the Curie-Weiss model is so simple, this double sum is really just one over n times (sum of sigma_i) squared. That's where the simplicity of the model comes to the forefront: I can rewrite everything as a function of this sum of sigma_i. Again we see that the normalization is probably pretty good, because the derivatives of this function are of order one: this one is between minus one and one, and this one is between zero and one. Moreover, if you're a bit optimistic, you might think that the t-derivative is the square of the h-derivative; it looks like it. It's not really true, but it's a good starting point, because when we look at the difference between the t-derivative and the square of the h-derivative, what we find is the variance of this quantity, one over n times the sum of sigma_i, which I would call the mean magnetization. It's the variance of the magnetization: the average of (one over n times the sum of sigma_i) squared, minus the square of the average of one over n times the sum of sigma_i. We don't know yet how to make this rigorous (everything so far is rigorous; what I'm going to say next is more at the level of intuition), but what's on the right side, this quantity itself, is of order one, and what we're writing is the variance of
this thing. So if you believe that there are some stochastic cancellations in your problem, maybe the variance will be of lower order, and we start to feel some hope that, as n becomes very large, this function F_n almost solves an equation of this form with basically zero on the right-hand side: something small as n becomes very large. So the next thing I want to do is try to make this rigorous: how can we justify that the thing on the right side is actually small in some sense, so that in the limit the function solves the equation with zero on the right? I need a way to understand the variance of this sum of sigma_i. I said before that our function F_n is the moment-generating function of our variables; in particular, as a function of h, it is the moment-generating function of the sum of sigma_i. So it should encode the variance of this variable in some way: if I study F_n carefully enough, I should find a way to express this variance in terms of F_n; I just need to find the right way to pull it out. And maybe looking at the second derivative of F_n in h is a good idea. So let us try to do that. Let me slide up so that we see the h-derivative; it's here. I'm looking at this ratio and differentiating one more time in h. I wrote dot dot dot, but inside this dot dot dot there's an h, right? So when we differentiate in h, first we differentiate that occurrence of h, and then there's also one somewhere here, so we get two terms. Let's focus on the first one, the one I have not written: it is again h times the sum of sigma_i; so I differentiate again.
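The variance identity derived above (the t-derivative minus the square of the h-derivative equals the variance of the magnetization) can be checked numerically with finite differences; the parameter values are my own choices for the demo.

```python
import itertools
import math

def F(n, t, h):
    """F_n(t, h) = (1/n) log sum_sigma exp((t/n) * S^2 + h * S), with S = sum_i sigma_i."""
    total = sum(math.exp((t / n) * sum(s) ** 2 + h * sum(s))
                for s in itertools.product([-1, 1], repeat=n))
    return math.log(total) / n

def gibbs_average(f, n, t, h):
    """Brute-force Gibbs average <f> under the Curie-Weiss measure."""
    num = den = 0.0
    for s in itertools.product([-1, 1], repeat=n):
        w = math.exp((t / n) * sum(s) ** 2 + h * sum(s))
        num += f(s) * w
        den += w
    return num / den

n, t, h, eps = 8, 0.9, 0.15, 1e-4
dF_dt = (F(n, t + eps, h) - F(n, t - eps, h)) / (2 * eps)
dF_dh = (F(n, t, h + eps) - F(n, t, h - eps)) / (2 * eps)

m = lambda s: sum(s) / n  # mean magnetization of a configuration
var_m = gibbs_average(lambda s: m(s) ** 2, n, t, h) - gibbs_average(m, n, t, h) ** 2

print(dF_dt - dF_dh ** 2, var_m)  # equal up to finite-difference error
```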
what happens is that an extra sum of sigma_i comes out and shows up as a square here. So it's again an expression of the form (maybe I'll erase that) one over n times the average of (sum of sigma_i) squared. That's the first part, and then there's the second part, with the h that is somewhat hidden here, in the denominator; I need to differentiate this one as well. In that case we can think of the numerator as constant and just differentiate the denominator, and the result is minus the derivative divided by the denominator squared: this factor appears twice, and the denominator is squared at the end. So what it gives is minus one over n times (the average of the sum of sigma_i) squared; there's a minus in front, and (I think I messed up a scaling; yes) the one over n is outside the square. So, a bit of magic: this is very directly related to the variance, because when you compare these two expressions, you see that they are the same except that one of them has an extra factor of one over n. So what we have shown is that d_t F_n minus (d_h F_n) squared equals one over n times the second h-derivative of F_n. (Sorry, I made a mistake again: here I meant the square of the first derivative, and here we differentiate twice; this is my notation for differentiating twice, and that is the square of the first derivative.) And this is a very, very important observation, because now everything is expressed in terms of F_n. If we want, we can completely forget about our starting point, the study of this probability
measure and its definition in terms of probabilities, and just ask: what is the function F_n that satisfies this equation, and what happens, as n becomes large, for a function that satisfies this equation? It also gives further credibility to the idea that, as n becomes very large, this right-hand side will vanish, because there is a one over n in front; we're not so sure yet how to bound the second derivative, but maybe it will still disappear in some way.

One last observation about this function: you see now why I wanted to call this parameter t. I think of this as an evolution equation in t; I think of t as time. The partial derivative in t of this function is doing something, so it will also be useful to understand what happens when t is equal to zero: what is the initial condition? Let me write this. I rewrite the definition with t set equal to zero: it's one over n times the log of the sum over sigma of the exponential of h times the sum over i between one and n of sigma_i. And you see (I guess this is also one way to understand why I call this non-interacting) that this can be factorized: the exponential of the sum is the product of the exponentials, so I can write it as one over n times the log of a sum of products. Once you have factorized it, it's just one over n times the log of a product of n copies of the same expression, which is the exponential of h plus the exponential of minus h, everything to the power n. In fact it does not depend on n: the power n cancels with the one over n outside.
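The factorization at t = 0 is easy to confirm numerically: the brute-force F_n(0, h) equals log(e^h + e^{-h}) for every n. A quick sketch (the value h = 0.4 is an arbitrary choice of mine):

```python
import itertools
import math

def F(n, t, h):
    """F_n(t, h) = (1/n) log sum_sigma exp((t/n) * S^2 + h * S), with S = sum_i sigma_i."""
    total = sum(math.exp((t / n) * sum(s) ** 2 + h * sum(s))
                for s in itertools.product([-1, 1], repeat=n))
    return math.log(total) / n

def psi(h):
    """The t = 0 free energy: log(e^h + e^{-h}), independent of n."""
    return math.log(math.exp(h) + math.exp(-h))

for n in [2, 5, 9]:
    print(n, F(n, 0.0, 0.4), psi(0.4))  # the last two columns coincide for every n
```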
so what's really important is that this function psi of h is very easy to understand: it's very easy to understand what happens when t is equal to zero in this problem. We understand it perfectly; I could write psi explicitly, it's the log of the exponential of h plus the exponential of minus h, but I don't want to stress this too much; I just want to emphasize that we understand perfectly what happens at t equals zero.

Yes, there's a question in the chat about generalizing to a case without all-to-all interactions. A big aspect of what makes this possible is indeed that the model is mean-field, with very little geometry. If you were studying, for instance, the Ising model on a lattice, things would be more difficult: I think you would not find a way to close the equation the way I did. There are papers in the literature that use ideas like this to get bounds on some aspects of the Ising model on the lattice, for instance: you cannot close the equations in general, but maybe you can find inequalities, and these help you understand the model even though you cannot write closed expressions. But yes, the fact that here we have all-to-all interactions is what really allows me to close the equation, and that's what will allow us to completely understand the problem at the end of the day.

So you see now what I want to do: explain what we do with the two things squared in red. Starting from this probability problem, we derived these two identities, and now I claim that from these two identities we should be able to understand what the function F_n is doing in the large-n limit. It becomes a completely self-contained problem about a function that satisfies this equation and has a simple initial condition, and we should understand what happens when n becomes
large. And before I move to that, I want to stress one more thing, which is in some sense the most important connection between this function F_N and our heuristic understanding of the system: it is that the h-derivative of the function F_N is the mean magnetization. Okay, so this I want you to remember, just as a way to interpret whatever we find about the function F_N later on. The derivative will indicate for us what the mean magnetization of the system is. So maybe I'll take five more minutes to just describe a few things about this, and tomorrow we will spend more time making things very rigorous. So let's say it's a second section, and it will be some interlude. Okay, so I want us to take a step back and think about what this equation is doing. We need to think carefully about what it means to be a solution of this equation, ∂_t f − (∂_h f)² = 0, and in the short amount of time I have left, I want to describe the first naive attempt and why it's not working. So perhaps the most naive thing you should first try when you think about what it should mean for a function to solve this equation is: well, let's just look at a C¹ function and then ask that it solve the equation pointwise. What's the problem with that? So maybe we look for C¹ functions that satisfy the equation everywhere, and now I want to borrow a bit of what you may already have as intuition about the standard Ising model to explain why there will be a problem with that. The problem is the following: those of you who already know about the Ising model know that there is a phase transition going on there. So when the inverse temperature, which in our setting is called t, is small, nothing particularly impressive happens, and as you vary h the mean
magnetization varies smoothly. So maybe, if I draw a picture of the function F_N(t, ·) as a function of h, it will be some smooth thing like this: this is h varying, this is the value of the function, and this is for small t. Okay, so for fixed small t we expect that it will look like this. But then when t becomes large we expect that there will be a phase transition, and what does it mean? Well, you could understand this as saying that if h is positive but very, very tiny, still the mean magnetization will be positive and bounded away from zero. So it means, in our context, that the derivative of F_N, even when N becomes very large, is positive and bounded away from zero; and instead, if the parameter h is negative, even very, very tiny, then the mean magnetization, by symmetry, will be negative and bounded away from zero. In other words, we expect that there will be a jump in the derivative of the function. So it will look a bit like the absolute value function near h equals zero. Let me try to draw it. Okay, that's not great, but I want to emphasize that there will be a corner at the bottom (oops), and that was supposed to be near h equals zero. So this is h, and this is for t large. So I claim that this is what the function should look like when t is large, as we vary h, and in particular it's not a C¹ function; there is something that goes on here which is not C¹. So in particular, here there's no good notion of derivative, and you cannot say the equation is solved pointwise at this point. So we'll have to find a better way to express what it means to solve the equation, and this is what we are going to talk about tomorrow. So yeah, thank you; now it's a good time for questions. Okay, so before we go to questions, I'm just going to briefly unmute everyone so we can give Jean-Christophe a big round of applause for a wonderful first lecture. Thank you so much.
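To make the identities from the lecture concrete, here is a small numerical sketch; this is my own illustration, not part of the lecture. I use the normalization F_N(t,h) = (1/N) log Σ_σ exp(N(t m² + h m)), where m is the empirical magnetization of σ ∈ {−1,+1}^N; this normalization is an assumption on my part, chosen so that the identity ∂_t F_N − (∂_h F_N)² = (1/N) ∂²_h F_N holds exactly, and it gives the initial condition ψ(h) = F_N(0,h) = log(2 cosh h).

```python
from math import comb, exp, log, cosh

def free_energy(N, t, h):
    # F_N(t,h) = (1/N) log sum_sigma exp(N (t m^2 + h m)),
    # with m = (1/N) sum_i sigma_i; configurations are grouped
    # by k = number of -1 spins, so m = (N - 2k)/N with
    # multiplicity C(N, k).  Log-sum-exp for stability.
    terms = []
    for k in range(N + 1):
        m = (N - 2 * k) / N
        terms.append(log(comb(N, k)) + N * (t * m * m + h * m))
    M = max(terms)
    return (M + log(sum(exp(x - M) for x in terms))) / N

N, t, h, eps = 60, 0.3, 0.2, 1e-4
# finite-difference derivatives in t and h
dF_dt = (free_energy(N, t + eps, h) - free_energy(N, t - eps, h)) / (2 * eps)
dF_dh = (free_energy(N, t, h + eps) - free_energy(N, t, h - eps)) / (2 * eps)
d2F_dh2 = (free_energy(N, t, h + eps) - 2 * free_energy(N, t, h)
           + free_energy(N, t, h - eps)) / eps**2
# the exact identity: dt F - (dh F)^2 = (1/N) dhh F
print(dF_dt - dF_dh**2, d2F_dh2 / N)   # the two values should nearly agree
# initial condition: F_N(0, h) = log(2 cosh h), independent of N
print(abs(free_energy(N, 0.0, h) - log(2 * cosh(h))))
```

One can also evaluate ∂_h F_N near h = 0 for larger t and N to watch the jump in the derivative (the corner described above) develop.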
We consider the problem of estimating a large rank-one matrix, given noisy observations. This inference problem is known to have a phase transition, in the sense that partial recovery of the original matrix is only possible if the signal-to-noise ratio exceeds a (non-zero) value. We will present a new proof of this fact based on the study of a Hamilton-Jacobi equation. This alternative argument allows us to obtain better rates of convergence, and also seems more amenable to extensions to other models such as spin glasses.
10.5446/55637 (DOI)
Okay, so welcome everyone to the second day of the Online Open Probability School. Thank you all for coming. It's a pleasure to have our second lecture from Jean-Christophe Mourrat on disordered systems and Hamilton-Jacobi equations. Just before we get to that, I'll remind you that the lectures are being recorded and posted on YouTube and on the course homepage, as well as being live-streamed both on Zoom and on YouTube. So if you don't wish to be recorded, it's best to turn your video off and refrain from using the chat. That said, we welcome discussion in the chat, and in particular questions for the speaker, which either he'll respond to directly or the moderators will keep an eye on and flag to his attention if we see something important being raised. As with yesterday's lecture, we will probably have about half an hour followed by a short break, a small question period, and then the second half of the lecture, followed by breakout rooms for participants. We ran, as promised, a couple of breakout rooms before the lecture as well, and I'd welcome any feedback from participants about how that went; I didn't pop into any of the rooms myself, so I'm not sure. Having said that, let me also remind you that there's a Zulip server for which you can create an account if you haven't already, designed for discussion about the various OOPS courses. The link is on the OOPS homepage, and that is a good resource for posting discussions that you may want to have be less transient than the Zoom chat, which is not saved as part of the stream and disappears forever at the end of each lecture. All right. So once again, it's a pleasure to have Jean-Christophe Mourrat for his second of three lectures. And over to you, JC. Thank you very much. So yeah, I'm very happy to be back here and to see that there are still many people joining. So I thank again the organizers for making this possible; it's a real pleasure. And I'll try to start sharing my screen. Yes, it seems to be working.
All right. So yesterday, what we did was, first, I tried to introduce the motivations behind what I want to discuss, speaking a bit about this spin-glass model, but only briefly; I was also waving my hands around this rank-one estimation problem that I want to discuss later. And then I started to make some calculations about the Curie-Weiss model. And what we've done is see that if we look at this function I call capital F sub N, it solves an approximate version of an equation, with some part that has a one-over-N in front. And so we're wondering, or it feels kind of plausible, that in the limit, the limit object will solve the equation where instead of this one-over-N times second derivative, there is a zero. And then we have to pause a little bit and think carefully about what it means to solve this equation. So let me rewrite this here. We need to think about what it means to be a solution of this equation, which is the one that we found for Curie-Weiss, except that instead of zero there was this one-over-N times the second derivative with respect to this h parameter. And remember that in the setting of Curie-Weiss, this derivative with respect to h can be interpreted as the mean magnetization of our system under our Gibbs measure. All right, so the first attempt I described at the very end of the lecture yesterday is: what about we just ask for our function to be a C¹ function and that it solve the equation pointwise? I tried to argue that this is not going to be a good notion for us because, if we use our previous intuition about Ising models in general, we expect that there will be phase transitions for the magnetization. So we expect that there will be jumps in the derivative. So although our functions are all Lipschitz with the same constant, it's not plausible that they will be C¹ functions; the derivatives will have jumps.
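For reference, here is the pair of equations being discussed, written out; this is my reconstruction from the blackboard description, with ψ denoting the initial condition computed yesterday.

```latex
% Finite-N identity for the Curie-Weiss free energy F_N(t,h):
\[
  \partial_t F_N \;-\; \bigl(\partial_h F_N\bigr)^2
  \;=\; \frac{1}{N}\,\partial_h^2 F_N ,
  \qquad F_N(0,h) \;=\; \psi(h),
\]
% and the candidate limit equation (HJ) for f = \lim_{N\to\infty} F_N:
\[
  \partial_t f \;-\; \bigl(\partial_h f\bigr)^2 \;=\; 0 ,
  \qquad f(0,h) \;=\; \psi(h).
\]
```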
So we need to lower our expectations about what being a solution to this equation should mean. And so, I said again that our functions are Lipschitz; you remember that when we computed the derivatives, I pointed out that they are all bounded by one. And there is this theorem of Rademacher, which says that any function that is Lipschitz is differentiable almost everywhere. So we could say: what about we ask that the equation on display here be satisfied at almost every point? That sounds like something we can ask, and it sounds like a second, perhaps more reasonable guess than our first guess. So let me write this. What about: we ask that f be Lipschitz. I will not write it, but bear in mind that this means in particular that f is differentiable almost everywhere. And then we can ask whether or not the equation is satisfied almost everywhere. So: the equation holds almost everywhere. And now the problem is that we are facing the opposite effect of our first attempt. With our attempt with C¹ functions, we discovered that there is no existence; the function we actually want to see is not C¹. And now it's the opposite problem, which is that this condition I wrote down will not provide us with a uniqueness criterion. There will be many solutions for a given equation. So let me illustrate this. This is the problem: no uniqueness. To really clarify this, let me start by forgetting about initial conditions and observing that there are some simple functions we can write down that are solutions. For instance, zero is a solution; okay, that's pretty clear. Also the mapping which to (t,h) associates t + h is a solution, because the t-derivative is one and the h-derivative is one, so when you compute ∂_t f − (∂_h f)², that will be one minus one squared, so it's also zero.
And also, if I look at t − h, that's also a solution, right, because the derivative in h would be minus one, but still the equation will be satisfied. And the claim I'm going to make now is that from these three solutions, I can already make new solutions that are going to contradict the uniqueness statement. So consider the following. Again, I'm going to draw a graph for a fixed t; this is the h variable, and what I draw is the value of the function. So I'm going to do it like this. Okay, so it should be symmetric, like a tent function. The corner here is at h equals t, the value of the function at the top, at h equals zero, is t, and this corner is at h equals minus t. So you see what I've done: on this part it's the function t − h, on this part it's t + h, and outside of this tent part, it's the zero function. So away from the corners, this is actually one of the three solutions I wrote down. And yes, James, you're right, but then you see, now I'm combining them. Okay, so let me finish this and then I'll answer the question. So I hope you're convinced that this function, as I drew it, is a solution almost everywhere, because it's just at these corners that there are difficulties, and we're allowed to throw away a set of zero Lebesgue measure in these (t,h) coordinates. So it's a solution. And you see that at t equals zero it starts from the zero initial condition. But zero is also a solution. So from the zero initial condition, we have the zero solution, and we also have this tent function with this triangle that goes like this. And you can, in fact, decide to make this triangle appear at any time, and you can make it be anywhere in space, so you have any number of solutions like this. So to answer James's question in the chat, he was asking whether the solutions that I wrote down here are not C¹ solutions. Indeed, they are C¹ solutions, but at least these ones don't have the same initial condition.
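To make the non-uniqueness concrete, here is a quick sanity check (my own illustration, not from the lecture) that the tent function f(t,h) = max(t − |h|, 0), glued from the three explicit solutions just listed, satisfies the equation away from its corners while sharing the zero initial condition with the zero solution.

```python
def tent(t, h):
    # f(t, h) = max(t - |h|, 0): glued from t - h, t + h and 0
    return max(t - abs(h), 0.0)

def pde_residual(f, t, h, eps=1e-6):
    # finite-difference residual of  d_t f - (d_h f)^2  at (t, h)
    df_dt = (f(t + eps, h) - f(t - eps, h)) / (2 * eps)
    df_dh = (f(t, h + eps) - f(t, h - eps)) / (2 * eps)
    return df_dt - df_dh ** 2

# away from the corners h = 0 and |h| = t the residual vanishes ...
for (t, h) in [(1.0, 0.5), (1.0, -0.3), (1.0, 2.0), (2.0, 1.2)]:
    assert abs(pde_residual(tent, t, h)) < 1e-6
# ... and at t = 0 the tent agrees with the zero solution,
# so the "a.e." notion alone cannot distinguish the two
assert tent(0.0, 0.7) == 0.0
```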
So when I speak about uniqueness, I mean: given a value fixed at t equals zero, will this propagate into a unique solution, or will there be many? And a good counterexample is the one I drew, because this one starts from zero and is not the zero function. Okay, is this clear? Are there further questions? So we need to find some equilibrium between these two situations; it's a bit frustrating. But one thing that we can notice is that we can be pretty sure that the function I drew here is not the function we care about. And the reason is that we know something about the function we care about: you remember, its second derivative is the variance of the magnetization, so in particular it's non-negative. In other words, we know that the function is convex in the h variable. And here we really have a breakdown of the convexity: the tent function is not convex. And now the hope is that maybe, if we add this further requirement of convexity, then this will finally select for us a unique solution. And I'm going to argue that this is the case. It's not obvious a priori, but I'm going to discuss with you that this is true. So this is my proposal for the definition of the notion of solution. We say that a Lipschitz function f, from ℝ₊ × ℝ to ℝ, is a weak solution of the equation, which I'll call (HJ), so (HJ) is the equation we are going to focus on, the one that is here, if the following two conditions hold. First, the relation (HJ) is satisfied almost everywhere. And second, we have this convexity in h: for every t, the mapping h ↦ f(t,h) is convex. So this rules out the counterexample I gave. And what I'm going to argue next is that indeed this selects a unique solution for us. So, proposition. Let's phrase it like this: if f and g are two weak solutions
of the equation (HJ) with the same initial condition, which I'll write like this: f(0, ·) = g(0, ·), then they are equal. So Peter is asking what the intuition is behind asking for convexity, and we'll see it in a moment when we do the proof. I'm going to prove this proposition because I think it's really, in some sense, the powerhouse of the whole argument. But first, before I go into the proof, I want to take a step back and explain why I really want to insist on this proposition. Why do we bother with this? It doesn't talk about my probabilistic model, and if I come back to our probabilistic model, before we pass to the limit, everything is finite, everything is smooth. So why should we worry about this? The point is that, ultimately, we want to understand the limit, and what we want to do really is a perturbation of the statement that is written. What is written is that if we have two solutions with the same initial condition, then they are equal. But what we really want to have is: when N is very large, we have something which is almost a solution, and we want to compare it with the true solution and say that they are close. Okay, so you see, what we need is just a refinement of what is written: instead of "two solutions must be equal", we are going to say "something which is almost a solution and a solution must be close". So really, this proposition is in some sense a simplification of the problem we're trying to solve. Yes, so we have good questions. Someone is asking what I mean by this: do I mean almost everywhere? So, in fact, here the functions are Lipschitz, so each time we speak about the functions themselves, we should really think of it as a locally uniform type of information; so here I mean really everywhere.
Almost everywhere can be upgraded to everywhere by continuity. So, as a rule of thumb, let's say: each time we write identities involving derivatives, we interpret them as holding almost everywhere, and each time we write identities involving the functions themselves, we interpret them as holding everywhere. All right, so I hope you're on board with me that this is a useful statement to worry about; we should find a proof of this thing. I'm going to give a slightly incomplete proof, so I'm going to call it a sketch of proof, and I will suggest that you think about how to make it fully rigorous. So let's study the difference between these two functions; I want to show this difference is zero. You know, I don't have much to work with: I have an equation for f and for g, so I'm going to try to write an equation for the difference between f and g and see what it looks like. So I differentiate the difference in t. Essentially by definition, it's the derivative of f squared minus the derivative of g squared, and I'm going to try to make the difference appear again: I can simply write this as the sum of the h-derivatives times the h-derivative of the difference. So if I give a name to this sum, let's call it b, this is b by definition, then what I just displayed is that, writing w for the difference f − g, we have ∂_t w − b ∂_h w equal to zero. So in other words, when we look at the difference between two solutions, we can always think of it as solving some sort of transport equation, the equation that I wrote here, almost everywhere. And now, perhaps to start to answer a little bit the question of Peter in the chat about why we require convexity, what will be of help really is that this b has a derivative with a sign: you see, the h-derivative of b involves the second derivative of f in h and the second derivative of g.
So, by convexity, the derivative of b is non-negative. And if you were doing the higher-dimensional version of this argument, you would find that the divergence of a vector field in place of b has a sign, and the fact that it has a sign means something about the flow lines of the vector field being either convergent or divergent. Anyway, this is what will allow us to figure out how to prove the uniqueness. Right, so that was still very sketchy. Let me start with a rough idea, and then I'll try to give a more precise argument, still with a tiny gap. The rough idea is that we're going to look at the integral of w in h and look at how this evolves in time. Right now, you may worry that this integral is not well defined because it may be infinite, and so on; I just want to give the idea, so let me pretend everything is well defined in this step. If I differentiate this expression in t, I get the integral of the time derivative of w; I won't write the variables again, but remember t is fixed and we're only integrating in the h variable. And we said that this time derivative is b times the h-derivative of w. Now, if I integrate by parts, and I pretend there are no bad boundary terms, you know, at infinity or something, what I get is minus the integral of (∂_h b) times w. And what we said is that ∂_h b has a sign: it's non-negative. So if you imagine for a second that w is positive, which means that f is above g, then naturally the integral will want to come back to being smaller, because if w is positive then this integral will want to decrease again. And if w is negative, this integral will want to increase, you know, to go back towards zero again.
So, you know, there is this kind of restoring effect, of trying to come back to zero, in this relation. Now, a lot of things are still very murky in my argument. For instance, I'm kind of pretending that w has a fixed sign, which there's no reason to believe, and also I'm integrating over the whole space, which is probably not well defined either. So I'm going to now try to fix the argument, but I hope that this kind of very rough one-line computation gives you a taste of why having this convexity assumption will be of help for us. So that was perhaps what I would write down in a draft; now I'm going to try to do a real proof. First, I'm going to fix this aspect with the sign: I want to force a sign on w, and you'll see that transport equations are nice for this. So let me define a function φ(x) = x²/(1 + x²). You know, it's a nice smooth function which has slope bounded by one, it's always non-negative, and it's zero only at zero. And I use this function to set v = φ(w), so that's φ(f − g). So at least v is non-negative. And our goal is to show that f is equal to g, which is to show that w is equal to zero; but you see, that is equivalent to showing that this function v is equal to zero. So let me remind you of the equation solved by w; it's here. Now, what happens for v? I claim that, and that's the magic of transport equations, the same equation is satisfied by v as well. Okay, so if you think about it for a second, the time derivative of v is the time derivative of w times φ′(w). So this φ′(w) factors out of every term, and so it simplifies, and this relation is also valid for v.
Everyone is fine with that? So yeah, I've already cleared up one aspect of my fake proof, which is that now v has a fixed sign: it's non-negative. Now, the second part I have to deal with is that I was integrating over the whole space, so I'll instead take an interval. And the problem, if I'm not paying attention, is that when I do my integration-by-parts argument, it will produce boundary terms, and I have to catch them back in some way. So the way I'm going to do this is that, instead of integrating over a fixed interval, I'm going to change the size of the interval as I move along in time, so that it will produce some extra good terms, if you want, that will allow me to catch back whatever shows up in the boundary terms of the integration by parts. Okay, so that was still a bit shaky, but, yes, someone is asking why this function φ exactly: really, I just want a function φ which is zero only at zero, is non-negative, and has bounded derivative. So, you know, I just picked that one, but whichever such function is fine. All right. So let me go forward and try to find this interval. I introduce a notation: these are the Lipschitz constants in the variable h of f and g, and I just sum them up and I add one; call the result L. And I will use this constant L to fix the speed at which I change the size of my interval. So, I keep making this movement with my hands, I hope you see it: the size of the interval over which I integrate is going to shrink as time moves, and I just do a linear thing. So at some point it will just be a point and then there will be nothing. So I have to fix a finite time horizon and then start from there. I fix a finite time; I call it capital T.
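The chain-rule step just described, namely that v = φ(w) satisfies the same transport equation as w, can be checked symbolically. Here is a small sketch using sympy (my own illustration; w and b stand for the difference f − g and the coefficient ∂_h f + ∂_h g).

```python
import sympy as sp

t, h, x = sp.symbols('t h x')
w = sp.Function('w')(t, h)      # plays the role of the difference f - g
b = sp.Function('b')(t, h)      # plays the role of d_h f + d_h g
phi = x**2 / (1 + x**2)         # the squashing function from the lecture

v = phi.subs(x, w)              # v = phi(w)
# the transport operator applied to v ...
lhs = sp.diff(v, t) - b * sp.diff(v, h)
# ... equals phi'(w) times the transport operator applied to w,
# so if w solves the transport equation, so does v
rhs = sp.diff(phi, x).subs(x, w) * (sp.diff(w, t) - b * sp.diff(w, h))
assert sp.simplify(lhs - rhs) == 0

# phi is non-negative, vanishes only at 0, and has derivative bounded by 1
samples = [sp.Rational(k, 7) for k in range(-21, 22)]
assert all(phi.subs(x, s) >= 0 for s in samples)
assert all(abs(sp.diff(phi, x).subs(x, s)) <= 1 for s in samples)
```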
So, instead of the function I had written above as capital I, I propose that we study this quantity: J(t), the integral from −R(t) to R(t) of v, where I shorten R(t) = L times (T − t). And writing it explicitly, the integrand is v(t,h); t is fixed in the integral and I only integrate over h. And then we let t vary and see what happens. Oh yeah, so first I should say this function J is Lipschitz in t, and so I can compute its derivative almost everywhere, and one part of it is the integral, from −R(t) to R(t), of ∂_t v; that's one part of the time derivative. Yes. And ultimately I want to show that this function J is zero. Okay, so I want to control the derivative of J: what we're going to show is that the derivative of J is non-positive, since J is a non-negative quantity that starts from zero. All right, so the time derivative of this integral is: one part is just differentiating under the integral, but there's also a part related to the fact that the boundaries of the interval are moving: minus L times v(t, R(t)) plus v(t, −R(t)). So they both appear with a minus sign in front, because the interval is shrinking as time moves. I produced these terms so that they would help me: I want to show that J does not increase, and so, you know, I'm very happy if the time derivative of J is something minus some non-negative terms; these are just good for me. All right. And now I'm going to do my integration by parts in the remaining term. So let's do this. Remember, the time derivative of v is b times the h-derivative of v. And now I do my integration by parts: it's minus the integral from −R(t) (oops) to R(t) of (∂_h b) times v, and I have these boundary terms; maybe I'll write them like this, b times v, bracketed between −R(t) and R(t).
Okay, and these boundary terms are kind of more annoying for me, in that there will be at least one term which is not of the right sign. And I'm not paying attention here to justifying passing the derivative below the integral; so yeah, sorry about that, it's not a fully rigorous proof, but I think it's okay. So, for this part, a priori at least one of these two terms does not have the right sign for you; it may have the tendency to let J increase, you know, the one which comes with a plus: there will be plus b times v evaluated at (t, R(t)). But the point is, at least my hope is, that it can be compensated by these minus terms that we have here. And you see why this is true: we have v(t, R(t)), which is also here, and now the claim is that |b| is smaller than this coefficient L. I chose this L so that this happens; let's verify that I did not mess it up. Remember, the definition of b is further above, it's here, it's the sum of these h-derivatives, and the way I chose L, it's an upper bound on these things: I summed the L-infinity norms of these guys and added one. So, long story short, if I take this boundary term minus this guy, it's a negative quantity, negative in the sense of less than or equal to zero, in the wide sense. In other words, these boundary terms don't bother us, and what we are left with is just this integral term. And now, finally, ∂_h b is non-negative, and v also is non-negative (I never know if I should say positive or non-negative: it's greater than or equal to zero). And so the whole thing, with its minus sign, is less than or equal to zero. And, you know, that's it, that's what I wanted to show: dJ/dt is less than or equal to zero.
And why do I say that's all I wanted to show? Well, J at time zero is zero, because the initial conditions agree, and J is non-negative, right, because it's the integral of this v function, and v is non-negative. So what we have shown allows us to conclude that J is constantly equal to zero. Okay, so that means that v is equal to zero. Perhaps initially you can only conclude that it's zero almost everywhere, because we integrated against the Lebesgue measure, but it's a Lipschitz function, so as we discussed before, by continuity you can upgrade this, and you conclude that v is equal to zero, and this means f is equal to g. Okay, and that's the end of the argument. I draw a triangle instead of a square because some parts were not completely rigorous. So yeah, someone was pointing out that I was not really paying attention to the interchange of derivative and integral; that's true. And another aspect which I was kind of careless about is that if you look at b, it's not really differentiable. Let me display the definition of b again; it's here. I used this derivative of b with respect to h, and, you know, it's not clear that it makes sense, because I only know that f and g are Lipschitz, so in particular they are not twice differentiable. So my proposal for an exercise, and I think some of you are taking this for credit, is to make this proof rigorous. Let's say this is part of the credit assignment for those who take credit. And the idea, if you really want to make it rigorous, is to convolve a little bit with a smooth kernel, and if you struggle you can have a look in the book of Evans on PDEs. So I think it's already almost time for a break anyway; let's take a break now, and I'd be happy to take questions. So are there questions on this? Any questions in the chat?
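For reference, here are the main steps of the uniqueness argument written out; this is my reconstruction of the blackboard computation just carried out.

```latex
% Setting: w = f - g, \quad b = \partial_h f + \partial_h g, \quad
% v = \varphi(w) with \varphi(x) = x^2/(1+x^2), \quad
% R(t) = L (T - t), \quad
% L = \|\partial_h f\|_{L^\infty} + \|\partial_h g\|_{L^\infty} + 1.
\[
  \partial_t v = b\,\partial_h v \ \text{ a.e.}, \qquad
  \partial_h b = \partial_h^2 f + \partial_h^2 g \ge 0
  \ \ \text{(convexity in } h\text{)},
\]
\[
  J(t) := \int_{-R(t)}^{R(t)} v(t,h)\,\mathrm{d}h, \qquad
  J'(t) = -\int_{-R(t)}^{R(t)} (\partial_h b)\, v \,\mathrm{d}h
    \;+\; \bigl(b(t,R(t)) - L\bigr)\, v(t,R(t))
    \;-\; \bigl(b(t,-R(t)) + L\bigr)\, v(t,-R(t))
  \;\le\; 0,
\]
% using |b| \le L - 1 and v, \partial_h b \ge 0.  Since J(0) = 0 and
% J \ge 0, this forces J \equiv 0, hence v \equiv 0 and f = g.
```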
Yes, so someone is asking if this is equivalent to the notion of viscosity solution. So, in the case when the initial condition is convex and Lipschitz, the answer is yes. But what really makes this proof work: you can weaken the assumptions in several ways. You don't really need to assume that the function is convex in h; it would suffice that it's semi-convex, you know, that you just have a lower bound on the second derivative in h, maybe negative, but that would suffice. In general, you can try to see what happens in higher dimension, if you try to make sense of equations of this form. And the thing that really makes the proof work is that the nonlinearity is convex: if you think carefully about how we are doing the argument in the proof, this is really what is necessary for things to work out nicely. So really, the power of the notion of viscosity solution comes mostly when the nonlinearity is neither convex nor concave; but, you know, whenever the nonlinearity is convex or concave, the two notions will coincide. Now a participant is asking what this means from the physics point of view. Well, for starters, it's perhaps not completely easy to say what this function F_N means. Okay, it's a free energy. And the fact that it's convex in this h parameter is a very robust feature of these models: each time you construct a free energy like this, with a parameter multiplying some quantity, when you differentiate twice, the second derivative will be a variance. So you will always have this convexity property. And I'm not sure exactly how to phrase it best in terms of the physics, but I guess you could say that it's the convex conjugate of some other reasonable function of physics.
Yeah, okay, I'm not sure — I think that could be a reasonable argument. Someone is suggesting the Legendre transform — the question is whether it is the Legendre transform of something — yeah, maybe the Legendre transform of the large deviations functional or something. Okay, so maybe that's a good moment for just a two-minute breather, and we'll resume very shortly — does that sound okay, Jean-Christophe? Yeah, absolutely. Great. There's another comment in the chat from Peter — again, I'm not sure how to comment. It's 12:45, so it's a good time to pick up again anyway. So, I think when you do a Legendre transform, then by definition you get a convex function — so, in my opinion, if it is a Legendre transform, it's going to be convex. I'm too much of a mathematician to be sure what free energy means when I don't write the formula. So, yes, it's possible that in some cases it makes sense to speak about a free energy which is not convex — I'm not going to go into it too much, and I'm sure if I did, I would probably say stupid things. All right, so I have perhaps one more comment. I don't want to insist very much on the existence aspects for this equation, because in some sense, for us, this comes for free: we're studying a model and we're wondering if it converges to something. The only thing we need to worry about is to show that this thing is converging. So the fact that we will produce some solution is almost given, because any limit point basically should be a solution, if we are not giving a bad definition of solution. Perhaps another way to say it is that you could try to run the proof that I just gave, but instead of speaking about solutions, you would use it to show that the sequence of functions f_n is a Cauchy sequence. And you would manage to do that argument.
So you would not know existence of solutions, but you would show that this is a Cauchy sequence, therefore it converges, and then you would show that the limit is a solution. But in any case — despite the fact that I think it's not critical to worry about existence — I think it's useful to have in mind that we can write down a formula for the solution. I'm not going to dwell on this too much, partly because time is running short. So, in the setting that we are exploring, the function is f(t, h) = sup over h′ of [ψ(h′) − (h − h′)²/(4t)]. This is the Hopf-Lax solution — and I say "the" solution because we've shown it's unique. For now, let's just say that ψ is convex and Lipschitz. The claim is that if you write down this variational formula for f, then it is actually the solution of the equation. In general, what appears here is related to the convex dual of the nonlinearity of the equation. In our case, the nonlinearity was just the square function — let me write with a different color: p maps to p². And the convex dual of this function gives q maps to q²/4. So you can see some q²/4 showing up, in the form (h − h′)²/(4t). In general, this expression is related to the convex dual. I'm not going to prove this, but again, the fact that one can write a formula like this is related to the fact that the nonlinearity in the equation is convex. See Evans if you want to see a justification. And let me propose another exercise — not assigned for credit, just for yourself; it's a useful exercise. In the setting that we have, we have a specific definition for ψ.
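The Hopf-Lax formula just stated can be checked numerically. Below is a minimal sketch (my own, not from the lecture): a brute-force grid approximation of the supremum, sanity-checked against a linear initial condition ψ(h) = c·h, for which one can compute by hand that the supremum is attained at h′ = h + 2tc and gives f(t, h) = c·h + c²t, which indeed solves ∂_t f − (∂_h f)² = 0. The grid bounds and step count are arbitrary choices.

```python
# Numerical sketch of the Hopf-Lax (variational) formula
#   f(t, h) = sup_{h'} [ psi(h') - (h - h')^2 / (4 t) ],
# which, for convex Lipschitz psi, solves  d_t f - (d_h f)^2 = 0.
# Grid bounds and resolution below are my own arbitrary choices.

def hopf_lax(psi, t, h, lo=-50.0, hi=50.0, steps=200001):
    """Approximate the supremum over h' on a uniform grid."""
    best = float("-inf")
    for i in range(steps):
        hp = lo + (hi - lo) * i / (steps - 1)
        val = psi(hp) - (h - hp) ** 2 / (4.0 * t)
        if val > best:
            best = val
    return best

# Sanity check with psi(h) = c*h: then f(t, h) = c*h + c^2*t exactly,
# with the supremum attained at h' = h + 2*t*c.
c = 0.5
f_approx = hopf_lax(lambda x: c * x, t=1.0, h=2.0)
print(f_approx, c * 2.0 + c * c * 1.0)  # agree up to grid error
```

The quadratic penalty is smooth near its maximizer, so the grid error is of the order of the squared grid spacing, which is why a coarse one-dimensional search is already quite accurate here.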
But what I really want to use about ψ in this exercise is only that it's a smooth function, it has zero derivative at zero, and it has a positive second derivative at zero. The exercise: show that for t small, the solution still has zero derivative in h at zero — you can use the Hopf-Lax formula for this — while for t finite but large, the derivative on the positive side is positive (and, by symmetry, the derivative on the other side will be negative). So, using the formula, you can show that there is a phase transition in this derivative. My goal now is to wrap up the study of the Curie-Weiss model from what we have. What I really want to show is that this sequence of functions f_n converges to f — I think by now that has become clear. But as I've often emphasized, the physical information we perhaps most care about is the mean magnetization, which is encoded in the derivative of the function, and it's not clear a priori that from the convergence of the functions we understand something about the derivatives. Someone asks if we can identify the precise t at which the transition happens — that requires a more subtle analysis, but it is possible; in this exercise, showing the easy side is easier than what you ask, but what you ask can also be done. All right. So first I want to explain that if we understand the convergence of the functions, then we do understand the convergence of the derivatives as well, which is perhaps not obvious a priori. Let me state this. Suppose (t, h) is a point of differentiability of f.
If you want to be precise, differentiability only as h varies would be sufficient. And if f_n converges to f, then the derivatives ∂_h f_n at this point (t, h) converge to ∂_h f(t, h). So really, if we understand something about the derivative of the limit, then we do understand something about the derivatives before passing to the limit. Let me try to prove that statement. Now I'm back in the setting of the Curie-Weiss model — f_n is the function that was defined there. And the engine, the reason why the statement is true, is that the function is convex in h: if I look at f_n(t, h′), it will always be above the tangent at the point (t, h), so f_n(t, h′) ≥ f_n(t, h) + ∂_h f_n(t, h)(h′ − h). (Sorry — I just mean to vary h; t stays fixed. I just used this property because the function is convex. And yes, the convergence of f_n to f is, let's say, locally uniform — this is by assumption.) Now, ∂_h f_n(t, h) is bounded, so let me extract a subsequence along which it converges, and call the limit along the subsequence p. Then what happens when I let n tend to infinity? As was asked in the chat, the convergence is pointwise — pointwise or locally uniform gives the same here. Passing to the limit, it becomes f(t, h′) ≥ f(t, h) + p(h′ − h). So I have found an affine function which touches f at the point (t, h) and which is always below the function f. And if the point is a point of differentiability, there is no other way than for p to be the derivative of f. So p must be ∂_h f(t, h), and that completes the proof — this one is actually fully rigorous. So we do understand what happens for the true problem, even at the level of the derivatives, once we understand the convergence of the functions.
All right, and in the short time that I have left, I want to complete the proof of the convergence of this function. (Someone asks why such a subsequence can be found: the derivatives are bounded — remember, the function is Lipschitz — so I can always extract a subsequence.) So I hope I convinced you that the convergence of f_n should be some more refined version, a perturbation, of the uniqueness argument: we're dealing with a function which is almost a solution, and we'll show it's close to the true solution. Let me recall the equation for f_n: ∂_t f_n − (∂_h f_n)² = (1/n) ∂²_h f_n. So it's not exactly a solution — there's this extra right-hand side — and I still want to say that it's close to the true solution, which I still call f. So I'm trying to mimic the same argument: I define the difference w = f_n − f, and it solves an equation like before, ∂_t w − b ∂_h w = (1/n) ∂²_h f_n — I don't write the subscript n, but it all depends on n. Here b is, as before, the sum of the h-derivatives: b = ∂_h f_n + ∂_h f. And now, as before, I apply this function φ to make the difference have a sign. So I set v = φ(w). Before, it was really nice — v solved the same equation — but now I have to be a little more careful: ∂_t v − b ∂_h v = φ′(w) · (1/n) ∂²_h f_n, because this φ′(w) factor is still around. (Yes — the same φ, namely φ(x) = x²/(1 + x²).) So you see, it's really very close; the argument is very similar to what we've done before.
Except that there is this bit of right-hand side that we want to see is small. So we define again this J(t), which is the integral from −R_t to R_t of v(t, h) dh, where R_t is defined as before — there is this constant L inside, bounding b. We know the h-derivatives are all bounded by one, so we can take the same L as before (the way I defined L, it was just a bound of about two). So the time derivative of J picks up this extra term: the integral of φ′(w(t, h)) · (1/n) ∂²_h f_n(t, h) dh. We have to deal with this term, and the nice thing is that φ′ is bounded by one, and (1/n) ∂²_h f_n is non-negative — f_n is convex in h, I'm using this again here. So I deduce that the time derivative of J is bounded by the integral from −R_t to R_t of (1/n) ∂²_h f_n(t, h) dh. And you see I don't put absolute values, because I know it has a sign — which is very convenient, because then it's the integral of a derivative: this is (1/n) times the first derivative ∂_h f_n evaluated between the two bounds, from −R_t to R_t. And now, we've said several times that this h-derivative is bounded by one, so the whole quantity is bounded by 2/n. So what we have proved is that the time derivative of J is bounded by 2/n. Recall that the initial conditions coincide — the initial condition of f_n is the same as that for f — so J(0) = 0. And since its time derivative is bounded by 2/n, what we have shown is that J(t) is bounded by 2t/n.
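For the record, here is the chain of inequalities just described, written out in one place (my transcription of the steps, with the same notation):

```latex
% Recap of the estimate sketched above:
%   J(t) := \int_{-R_t}^{R_t} V(t,h)\,\mathrm{d}h, \quad V = \varphi(W),\ W = f_n - f.
\begin{aligned}
\frac{\mathrm{d}J}{\mathrm{d}t}
  &\le \int_{-R_t}^{R_t} \varphi'(W)\,\tfrac{1}{n}\,\partial_h^2 f_n \,\mathrm{d}h
   \le \frac{1}{n}\int_{-R_t}^{R_t} \partial_h^2 f_n \,\mathrm{d}h
   && (\varphi' \le 1,\ \ \partial_h^2 f_n \ge 0)\\
  &= \frac{1}{n}\Big[\partial_h f_n\Big]_{-R_t}^{R_t}
   \le \frac{2}{n}
   && (|\partial_h f_n| \le 1),
\end{aligned}
\qquad\text{so } J(0)=0 \ \Longrightarrow\ J(t) \le \frac{2t}{n}.
```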
Okay, and then — I'll kind of close this here, because I'm running out of time a little bit — but you see that this should close the argument, at least in this L¹-in-h, L∞-in-time sense: we have good control on the difference between f_n and f. And if you want really pointwise control on the difference between f_n and f, you can upgrade, because you know that all the functions are Lipschitz: if I tell you that two functions are close in L¹, and they are both Lipschitz, then you can deduce they are close in L∞ — maybe you lose a bit in terms of the rate, but you can still do it. So maybe I'll add as an exercise — again, not necessarily for credit — to clean this up; there were a few things that were not very rigorous, and I think the most important thing to clean up is how you get pointwise convergence from this kind of L¹ convergence. But in any case — okay, we can breathe; that's the end of the argument. And I really want us to bring with us, for the last lecture, an important message — a key point. When we face the problem of the rank-one matrix, we will again find some quantity similar to f_n. Sorry — there's a question in the chat about where this bound 2/n comes from. It is the difference of the evaluations of the derivative at two points, and each of these is between minus one and one, so summing them gives me a two. Remember, this h-derivative is the mean magnetization, so it is between minus one and one. Yes. So, in the future we will have a very similar situation, with a function which almost satisfies the same equation, in fact, and we're going to try to show it converges to the true solution.
And the key point to remember is what we need: if I write an equation for f_n, or some variant, with some error terms on the right-hand side, then what I have to do in this argument is to understand some L¹ estimate — some integral over h — of this quantity. So the key point I want us to remember, let me write it like this: if there are error terms on the right-hand side (I hope you see what I mean — whatever shows up on the right-hand side of the equation that we don't like and hope goes to zero), we need to estimate them in L¹ in the h variable. If you want a short summary of the statement: I want a local, L∞-in-time, L¹-in-h estimate. So the claim I'm making here — and you can play with this if you want — is that you can write down a more general version of the statement we just went through, for a function satisfying an equation with some right-hand side: if the right-hand side is small in this sense — locally uniformly in time, L¹ in h — then the argument works and we manage to show closeness to the real solution. That's the main thing I want us to remember for next time. I'll stop here — I think it's a good time for questions. And just before we go to questions: I don't know how the rest of you felt, but as the organizer, I think it's very nice to unmute everyone and give random applause. So thanks again for a lovely lecture, Jean-Christophe. Okay, I've just muted everyone once again, but I've now given you the ability to unmute yourselves. So please ask Jean-Christophe questions, or continue typing them in the chat if you prefer. Over to you, JC. — So I see a question in the chat. — You're still muted, perhaps. — I'm sorry, I was muted; I had not paid attention.
Yeah — so the first question is about the Hopf-Lax formula and what conditions there are on the initial condition ψ. I did not try to find optimal conditions. If you want a nice statement that is completely sufficient for our purposes, we can just take ψ to be convex and Lipschitz. But if you want a more robust theory, you can in fact justify it for any ψ with a bit of regularity — maybe continuous would be enough. At least, you can completely drop the convexity assumption. Then you have to reword a little what it means to be a solution: the way I wrote it, I said the solution is convex in h. That was fine for us, because everything is convex in h, so I didn't want to bother. But if your initial condition is not convex, then at the very beginning the function will still not be convex — the equation will not instantly convexify the function. So one way to fix it is to ask for the function to be locally semi-convex, which means: for every δ > 0, there exists C_δ such that for every t ≥ δ, the function h ↦ f(t, h) + C_δ h² is convex. If I wanted to say it in words: you have a lower bound on the second derivative, but maybe this lower bound degenerates as you approach t = 0. So that's the first question. The second question is: is there a modification of the Hopf-Lax formula for the pre-limit PDE in f_n? The answer is yes, and it will involve a little Brownian motion part. So, really, what happens in this Hopf-Lax formula is that you're optimizing over paths.
So if you have a bit of viscosity — if you have some of this second-derivative term — then instead of this kind of deterministic path, you would have a Brownian path that you're trying to steer in the right direction. So it would be, exactly — I think it would just be some expectation of some exponential of Brownian motion; I forget exactly the formula, but we can work it out. In the setting of tomorrow's lecture, though, we will not have a completely closed formula for f_n; we won't be able to write a formula of this sort before we pass to the limit. Okay — not hearing any further questions, at least not instantaneously, I'm going to stop here.
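The "expectation of an exponential of Brownian motion" the lecturer alludes to can be reconstructed — this is my own hedged reconstruction via the standard Cole-Hopf substitution, not a formula given in the lecture. Setting u = exp(n f_n) turns the viscous equation ∂_t f_n − (∂_h f_n)² = (1/n) ∂²_h f_n into the heat equation ∂_t u = (1/n) ∂²_h u, which suggests f_n(t, h) = (1/n) log E[exp(n ψ(h + √(2t/n) Z))] with Z standard Gaussian. The quadrature below is a sanity check on a linear ψ, where the Gaussian expectation can be computed in closed form.

```python
import math

# Hedged reconstruction (via Cole-Hopf) of the pre-limit formula:
#   f_n(t, h) = (1/n) log E[ exp(n psi(h + sqrt(2t/n) Z)) ],  Z ~ N(0, 1).
# The Riemann-sum quadrature parameters below are my own choices.

def f_viscous(psi, n, t, h, z_max=10.0, steps=20001):
    dz = 2 * z_max / (steps - 1)
    total = 0.0
    for i in range(steps):
        z = -z_max + i * dz
        weight = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
        total += weight * math.exp(n * psi(h + math.sqrt(2 * t / n) * z)) * dz
    return math.log(total) / n

# With psi(h) = c*h the expectation is a Gaussian moment generating function,
# and the formula gives exactly f_n(t, h) = c*h + c^2*t — the same value as
# the inviscid Hopf-Lax solution, since here the viscosity term vanishes.
c, n, t, h = 0.5, 4, 1.0, 2.0
print(f_viscous(lambda x: c * x, n, t, h), c * h + c * c * t)
```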
We consider the problem of estimating a large rank-one matrix, given noisy observations. This inference problem is known to have a phase transition, in the sense that partial recovery of the original matrix is only possible if the signal-to-noise ratio exceeds a (non-zero) value. We will present a new proof of this fact based on the study of a Hamilton-Jacobi equation. This alternative argument allows one to obtain better rates of convergence, and also seems more amenable to extensions to other models such as spin glasses.
10.5446/55638 (DOI)
We're referring to music in the chat. Related to the chat: we welcome questions during the lecture. The moderators will try to keep an eye on the questions and flag anything important to the speaker — and JC also does a remarkable job of moderating the chat himself and responding in real time. So please feel free to ask questions. If you do, it would be very nice for you to use your full name as your display name, if you feel comfortable doing so — just to create a feeling of real-person-ness to our event, let's say. Once again, we'll plan on roughly an hour lecture with a brief pause in the middle for a breather. And with that, let me also remind everyone that next week's course is being given by Gady Kozma on critical and near-critical percolation. That course will run on roughly the same schedule as this one, Monday, Tuesday, Thursday, starting at 1600 hours UTC. We hope to see you all back for that course. Okay, with that, it's a pleasure to... — A correction: next week is Monday, Wednesday, Thursday. — Ah, thank you for the correction; sorry about that. We'll send out the link to the Zoom meeting to the mailing list as usual. Okay, with that, I'm going to hand it over to Jean-Christophe Mourrat for the third lecture. Thanks again for being here. — Thank you very much. So, yeah, again, thanks a lot for making this possible. Let me try to share my screen. All right. I hope you've been enjoying it — for me, so far, it's been a good experience, and I hope it's a good experience for you too. Let me make sure I see the chat — yes, that's good. All right. So, in the first lecture, we saw this Hamilton-Jacobi equation arise from the Curie-Weiss model. And in the second lecture, we tried to understand how to rigorously justify the convergence to the limit: f_n was almost satisfying the first-order equation, with some 1/n-type error term.
And we tried to understand how to show that, in the limit, this error term indeed disappears in some sense. That was for the Curie-Weiss model — a very simple model. But now, as promised, I will finally discuss the problem of estimation of a rank-one matrix. Before I go into this, let me mention that this problem has already been solved by other people. I think the first paper that gave a complete resolution is the paper of Lelarge and Miolane — it's also a good reference to look at; they give a lot of context on the problem, so it's a great read. Shortly afterwards, Barbier and Macris proposed an alternative method to solve this problem. I think both papers were published in Probability Theory and Related Fields last year. And now I'm going to discuss yet another method, based on my own perspective with these partial differential equations. All right — so if you don't feel too confused by the first two lectures, I will start by describing the problem. This is the introduction, and I will also state the main result in this part. So, here is the problem. Let's say we have a vector x̄ of independent, identically distributed, bounded random variables; I will write the law as P_N — really a tensor power of one single measure. And for some t > 0, we observe the following. Using matrix notation — it's convenient to normalize with a factor √(2t/N) — we observe the matrix Y = √(2t/N) x̄ x̄ᵀ + W, where x̄ x̄ᵀ is a rank-one matrix and the noise W is very simple: just a big matrix full of independent standard Gaussians. If you don't like matrix notation: we're observing √(2t/N) x̄_i x̄_j plus noise, for every pair i and j between 1 and N. And again, I don't assume any symmetry — it's not critical — the noise entries are really completely independent.
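As a concrete anchor, here is a minimal sketch of the observation model just described (the function name and the choice of Rademacher ±1 entries for x̄ are mine — any bounded i.i.d. prior works):

```python
import math
import random

# Minimal sketch of the observation model:
#   Y = sqrt(2 t / N) * xbar xbar^T + W,  W_ij i.i.d. standard Gaussians,
# with xbar having i.i.d. bounded entries (here Rademacher +/-1 as an example).

def make_observation(N, t, rng):
    xbar = [rng.choice((-1.0, 1.0)) for _ in range(N)]
    scale = math.sqrt(2.0 * t / N)
    Y = [[scale * xbar[i] * xbar[j] + rng.gauss(0.0, 1.0)
          for j in range(N)] for i in range(N)]
    return xbar, Y

rng = random.Random(0)
xbar, Y = make_observation(5, 2.0, rng)
print(len(Y), len(Y[0]))  # an N x N matrix, no symmetry assumed
```

Note that each signal entry √(2t/N) x̄_i x̄_j is of order 1/√N while the noise is of order 1, which is why the interesting questions are about what can be recovered in aggregate rather than entrywise.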
So, t in some sense encodes a signal-to-noise ratio. If t is very small, your signal is very weak, and it's going to be hard to recover information about x̄. If t is very large, on the other hand, the signal-to-noise is very good, and we can be optimistic that there's something to be done about understanding what x̄ is doing. Okay, so we want to recover information about x̄ given Y — we want to understand how well we can do. We are not going to worry about algorithms: I'm going to assume we can find the best possible estimator given the data. So, for instance, we may want to understand the minimal mean square error, which depends on N and on this parameter t. I normalize by 1/N², and this is the best we can do — the infimum over all estimators, all functions θ̂ of the observation Y: MMSE(N, t) = (1/N²) inf over θ̂ of E|x̄ x̄ᵀ − θ̂(Y)|². (Yes — bad notation on my part: each time T appears as an exponent on a matrix, it means transpose.) Here I use this absolute value symbol with a matrix inside; I just mean the sum of the squares of the coefficients — think of it as one big vector. And by the properties of the L² norm, the best possible estimator is just the conditional expectation of the variable given the data, so you can rewrite the minimal mean square error as (1/N²) E|x̄ x̄ᵀ − E[x̄ x̄ᵀ | Y]|².
Okay, so this is the first motivation to want to understand the conditional law of x̄ given Y. (Someone is asking about the meaning of the infimum: I'm taking the infimum over all possible algorithms, if you want — all possible functions of the observation — and asking for the best possibility. But this is just an L² projection, so it's the conditional expectation; I will not justify this, but it's classical. Someone is also asking about the normalization: the matrix is N by N, so there are N² terms inside; with the 1/N², the quantity is bounded, since the variables are bounded.) So I want to compute the conditional law of x̄ given Y, and I will not do it rigorously, but let me do an informal calculation — in this case it will be a bit faster than usual. I'll try to maintain a faster pace than before today; I hope you'll be on board with me — feel free to still ask questions, but sometimes you will have to take my word for a few things. So, if I try to calculate the "probability" that x̄ is equal to x and that this capital Y is equal to y — a very shaky, non-rigorous calculation — well, I can think of it as being like dP_N(x); and then, once I have sampled x̄, if I want this capital Y to be equal to y, I have to prescribe W. Let me display to you again how this capital Y is defined — it's here — so if I have fixed this first term, I have to impose that W be y minus √(2t/N) x xᵀ. And W is Gaussian.
So what comes here to complete this is the Gaussian term: exp(−½ |y − √(2t/N) x xᵀ|²). That's the Gaussian part. So it's not very rigorous, and I apologize for this, but it gives at least the vague intuition that when we want to compute the conditional law of x̄ given Y, it will look something like: exp(−½ |Y − √(2t/N) x xᵀ|²) dP_N(x), divided by the normalization, the integral of exp(−½ |Y − √(2t/N) x′ x′ᵀ|²) dP_N(x′). (Note there are no bars on the x here — I made a few slips writing it.) Okay, I'm rushing a little, but I hope you get the gist of it. Now I want to expand the square inside the exponential — there's a square and a one half — and remove the |Y|² term, which does not depend on x and cancels between numerator and denominator. I just keep the two other terms, the cross product and the |x xᵀ|² term. So let me introduce a definition. Let's set H_N⁰(t, x) to be: the cross product between the two terms, which is √(2t/N) Y · x xᵀ — these are matrices, and I just mean the entrywise dot product — minus the square term, which carried the one half, so it's (t/N)|x xᵀ|², and that is just |x|⁴, the fourth power of the Euclidean norm. So: H_N⁰(t, x) = √(2t/N) Y · x xᵀ − (t/N)|x|⁴.
Okay — I'm going a bit fast, but in some sense I want to rush through this, because the exact form is not absolutely crucial. So here's the point. You should remember that Y is the sum of two terms: there's this √(2t/N) x̄ x̄ᵀ, and there's this W. And I'm going to rewrite H_N⁰(t, x) by substituting this in. Inside Y there is this noise W, and if I expand, I get √(2t/N) W · x xᵀ; and then the x̄ x̄ᵀ term, dotted against x xᵀ, gives — up to some little manipulations — plus (2t/N)(x · x̄)², since x̄ x̄ᵀ · x xᵀ is just the dot product of x and x̄ squared; and then minus (t/N)|x|⁴, which I don't touch. All right, I rushed through this, but let's take a deep breath now and see what it says. This first term is the most important term, in my opinion. You see, it's really like the spin-glass case I discussed at the very beginning, except I changed the names of the variables — just to see if you're following. The W here is the J I was using in the very first lecture, and x is the σ; then you see this is the sum of J_ij σ_i σ_j, except with different names. So this term is like the spin glass. And the other terms — we'll just not worry about them; they are not negligible in the sense of being smaller, but in some sense they are just here to help us, in some crucial places that we'll see very shortly. So I want to make the connection between the spin-glass model and this model.
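The substitution just performed is a purely algebraic identity, and it can be checked numerically. The sketch below (function names are mine) verifies, on a fixed small example, that plugging Y = √(2t/N) x̄ x̄ᵀ + W into H(t, x) = √(2t/N) Y · x xᵀ − (t/N)|x|⁴ gives √(2t/N) W · x xᵀ + (2t/N)(x · x̄)² − (t/N)|x|⁴:

```python
import math

# Numerical check of the identity used above, relying on
#   (xbar xbar^T) . (x x^T) = (x . xbar)^2   (entrywise dot product).

def dot_mats(A, B):
    n = len(A)
    return sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))

def check_identity(x, xbar, W, t):
    N = len(x)
    s = math.sqrt(2.0 * t / N)
    xxT = [[x[i] * x[j] for j in range(N)] for i in range(N)]
    Y = [[s * xbar[i] * xbar[j] + W[i][j] for j in range(N)] for i in range(N)]
    norm4 = sum(v * v for v in x) ** 2
    lhs = s * dot_mats(Y, xxT) - (t / N) * norm4
    overlap = sum(a * b for a, b in zip(x, xbar))
    rhs = s * dot_mats(W, xxT) + (2.0 * t / N) * overlap ** 2 - (t / N) * norm4
    return abs(lhs - rhs)

# fixed small example: the gap should be zero up to rounding
gap = check_identity([1.0, -2.0, 0.5], [1.0, 1.0, -1.0],
                     [[0.3, -0.1, 0.2], [0.0, 0.4, -0.5], [0.7, 0.1, -0.2]],
                     t=1.5)
print(gap)
```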
And it's really showing here: when I write the conditional law of x̄ given Y, it has the form of a Gibbs measure, with at least the most important term — in my opinion, let's say — looking like the spin-glass case. All right, so let me wrap this up. We have defined this function, so let me remind you of the context: W and x̄ live inside the probability space, if you want — x̄ is this vector of independent variables, W is this matrix full of noise — and then H_N⁰(t, ·) is a function of x, random because it depends on W and x̄. Now I want to define the associated Gibbs measure. I will use the same notation as for the previous model, with this bracket ⟨·⟩, and I use the little x as the random variable inside the bracket. So — instead of writing a sum over σ as before, I write an integral, as it strikes my fancy: ⟨f(x)⟩ = ∫ f(x) exp(H_N⁰(t, x)) dP_N(x), divided by ∫ exp(H_N⁰(t, x)) dP_N(x). That's the Gibbs measure, and notice that the important difference between the Curie-Weiss model and this model is that now our energy function is random: for any realization of x̄ and W, we have defined this probability measure, this bracket. Is that clear? Okay — I told you I would go faster. No questions, good. So let me state it as an exercise — some people are taking this for credit, so let's say it's an exercise for credit. I rushed through a heuristic, non-rigorous calculation, and I'm asking you to make it rigorous.
So: show that — what I was trying to argue is that when you sample x according to this measure, it's sampled according to the conditional law of x̄ given the observation Y. And I'm asking you to justify this, which we can encode like this: for any reasonable function f — say measurable and bounded — we have E[f(x̄) | Y] = ⟨f(x)⟩; this bracket indeed encodes the conditional law of x̄ given Y. All right, so before I move further, I want to justify why I said those extra terms are here to help us — why is it that this model, which I said looks like the spin-glass model, is actually simpler than the spin-glass model? Here is the important property; people call it the Nishimori property. The first thing I will say is actually a weaker version of the Nishimori property, but then I will build upon it. The first observation from the identity above is this: remember, the bracket is still a random measure, so if I want, I can average it over x̄ and W. What happens if I take this double average? Well, it's like taking the average of the right-hand side — it's a conditional expectation, and then I average over it — so in fact what I get is E⟨f(x)⟩ = E[f(x̄)]. Now, I think the first time I saw this, I felt a bit confused — I had a hard time even distinguishing this variable x from this variable x̄. What's perhaps a useful guide for the intuition: imagine the observation is empty — t equal to zero — so when you look at Y, you learn nothing about x̄; then this variable x is a re-sample of x̄ — an independent copy of x̄, if you want.
On the other hand, if Y revealed perfect information about x̄, then x would be equal to x̄. So the coupling between x and x̄ is a certain thing that interpolates between these two trivial cases: they are not the same, but this identity still holds. Beyond that, what shows up in spin glasses, and also in this model, is that when we do calculations we often make a new variable appear which is an independent copy of this variable x. So x′ is an independent copy of x under the bracket measure, hence also sampled according to the conditional law of x̄ given Y. In physics terminology, such an independent copy is called a replica: x′ is a replica of x. So let me do the calculation and then we will see what it gives. If I look at E⟨f(x) g(x′)⟩, the two replicas are independent conditionally on the observation Y, so I can alternatively rewrite this expression as E[⟨f(x)⟩ ⟨g(x)⟩]. Now, for the second factor, I can use that the bracket is the conditional law of x̄ given Y, so I can rewrite ⟨g(x)⟩ as E[g(x̄) | Y]. And remember that this law is random, so I cannot do just anything with it; but it is measurable with respect to Y, its randomness only depends on the observation. So I can really insert the first factor inside the conditional expectation, and in the end remove the conditioning on Y. What I have argued is that E⟨f(x) g(x′)⟩ is the same as E[⟨f(x)⟩ g(x̄)].
I think it takes a bit of time to digest that this is not a trivial identity. When you do this sampling, it is not the same as saying that x equals x̄: depending on what Y is doing, maybe x equals x̄, maybe x is a resampling of x̄. So it is not trivial, and it is only true because I also average outside, not only inside a single bracket. But this is really the engine, the reason why this problem will be simpler at the end of the day, when we try to identify the partial differential equation as we did for the Curie-Weiss model. In disordered models, these replicas show up, and in some sense each time you take a derivative a new replica appears and you do not know how to control it: as you take more and more derivatives, more and more replicas appear. In this problem that does not happen, because you can catch it back: when a new replica appears, this relation lets you link it back to the original signal x̄ that is in the problem. That is somewhat fuzzy talk, but it gives you the idea. Here I wrote something of product form, f(x) g(x′), but you can extend the identity to more general functions. Someone is asking: is this the Nishimori property? The answer is yes, or rather its version for a generic function: when you have a function of the two copies, E⟨f(x, x′)⟩ = E⟨f(x, x̄)⟩. In fact you can still play around and generalize some more: if you have three independent copies, you can replace one of them by x̄. But only one of them.
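In symbols, the identities just discussed read as follows; this is my transcription, with the bracket denoting the posterior average and x, x′ independent replicas:

```latex
\langle f(x)\rangle = \mathbb{E}\big[f(\bar x)\,\big|\,Y\big],
\qquad
\mathbb{E}\big\langle f(x)\, g(x')\big\rangle
  = \mathbb{E}\big[\langle f(x)\rangle\, g(\bar x)\big],
\qquad
\mathbb{E}\big\langle f(x,x')\big\rangle
  = \mathbb{E}\big\langle f(x,\bar x)\big\rangle .
```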
You can only do the argument I showed once; but in general you can take many replicas, and you can replace one of them by x̄. This will be a crucial tool for us to make our life easy. All right, let's take a short break.

Okay, I have several questions in the chat, so let me answer them. Richard is asking: what if x′ is a dependent, correlated copy of x? The rationale for why I want to study independent copies will come shortly: when you compute derivatives in these systems, correlations appear that you can represent using independent replicas. So it is natural to focus on the independent-copies setting, because that is what shows up when you compute derivatives. Richard is also asking: in the definition of Y, what motivates the parameter t? You would like to have a parameter that lets you tune the strength of the signal. Ultimately, what we would like to see is that if the signal-to-noise ratio is too small, then we will not be able to recover good information about the signal. For instance, if we study the minimal mean-square error, we will not be able to improve much on the trivial error. And then, after t is increased sufficiently, something happens and suddenly the minimal mean-square error starts to decrease. I want to see this transition as t varies. That is the main motivation for introducing t: to monitor this change of behavior as t crosses a certain value.
I will not have time to do all the steps towards this; we will just identify the PDE. You can then recover the minimal mean-square error by looking at the derivative of the limit, and although I will not have time to explain it, you can really recover this transition. I have plenty of questions, that's pretty cool, I appreciate it. So: x̄ is the thing we want to recover. It sits in nature somewhere, and we are trying to understand what it is. What we observe is x̄ x̄ᵀ plus noise, and we are trying to recover x̄. Instead of x̄, we have our best guess for x̄, which is the conditional law of x̄ given Y, and which it is convenient to represent using this variable x. In some sense, x represents our best guess for x̄, which may be different from x̄, possibly very different. Someone says they are confused by all these x's and x̄'s, and I agree it is confusing. Perhaps what I just said is the helpful thing to keep in mind: x̄ is the thing we want to recover, and x represents our best guess for x̄, as a random variable; it is something that has the law of x̄ given what we have seen. It is not x̄, but if our observations were really good, it would essentially be x̄; if the observations are not very good, it comes with some randomness inside. And x′ is an independent copy of that same thing. Someone also finds my notation confusing, with the bracket and the f(x), and it is indeed a bit confusing: the way I use the symbol x is not really the same here and there. Inside the bracket, x is just the integration variable; a normal way to understand it is that, in this expression, x is, if you want, the canonical random variable on the probability space.
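To illustrate the role of t, here is a small self-contained computation; it is my own toy example, not the matrix model. For the scalar channel y = sqrt(2t) x̄ + z with x̄ uniform on {0, 1}, the minimal mean-square error E[(x̄ - ⟨x⟩)²] equals the prior variance 1/4 at t = 0 and decreases as t grows.

```python
import numpy as np

z = np.linspace(-8.0, 8.0, 32001)
dz = z[1] - z[0]
phi = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)

def mmse(t):
    # E[(xbar - <x>)^2] for y = sqrt(2t)*xbar + z, prior xbar ~ Uniform{0,1};
    # by the tower property this equals E[xbar^2] - E[<x>^2] = 1/2 - E[<x>^2]
    def m(y):
        # posterior mean <x> = P(xbar = 1 | y)
        w = np.exp(np.sqrt(2 * t) * y - t)
        return w / (1.0 + w)
    e_m2 = 0.5 * np.sum(m(z)**2 * phi) * dz \
         + 0.5 * np.sum(m(np.sqrt(2 * t) + z)**2 * phi) * dz
    return 0.5 - e_m2

errors = [mmse(t) for t in (0.0, 0.5, 2.0, 8.0)]
```

In the rank-one matrix model, the interesting phenomenon is that the analogous curve can develop a genuine phase transition in t; in this scalar toy case the decrease is smooth.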
The suggestion was that instead of writing f(x) I write f(·), which would make sense from a rigorous point of view, but I find it a bit less confusing to write it this way; or rather it is unclear, it just shifts the confusion around. Adam is asking: why don't we study this other quantity? What you really want is to look for the best possible estimator θ̂ that minimizes this mean-square quantity, and it turns out that the best possible estimator is the conditional expectation. Once you have your observation, your best possible guess for x̄ x̄ᵀ is the conditional expectation of x̄ x̄ᵀ given Y. It is a somewhat theoretical and abstract way to state it, but that is your best guess: it is a property of conditional expectation that it is the orthogonal projection onto this space. I am a bit confused by Zachary's question, so let me first answer the next one. Indeed, the noise W is independent of x̄; I had not said it, but yes, I assume they are independent. And maybe, for Zachary, let me wrap up what I have done in slightly different wording and see if it helps. Let's forget for a moment about the statistical-inference setting. We have some random variables that are given to us: x̄ and W. Out of these random variables, I build this function, called H_N^0(t, x). It is a function of x, and somewhere inside it there are W and x̄. Once I have done that, I build the Gibbs measure out of it. It is still a Gibbs measure that depends on the realization of x̄ and W, and I want to study it. Does that make sense?
So really, if you want: outside of the problem, we have the sources of randomness, which are W and x̄, and for each realization of this randomness we build a Gibbs measure, denoted here by the bracket, and I want to understand it. Does that feel reasonable? Yeah, awesome. I also felt confused when I started to think about this, so don't feel bad about it. All right. So we want to study this Gibbs measure, we want to understand it, and as for the previous model, the thing I want to focus on first is the normalization constant; we could call it the partition function. We understand that later we may take derivatives and so on, so, like a moment generating function, it is a good quantity to study first. And I want to apply the same strategy as before: take the log, which defines for me a function of t, and then look for a PDE, a partial differential equation, for this function. But the point is that I only have t in this function. It is a bit as if I were describing the Curie-Weiss model without having thought of adding the h parameter. If I had introduced only the t parameter in the previous model, I would have been stuck: I would have differentiated in t many, many times and never found a closed equation. Here I am in the same situation: I have this parameter t, but what do I relate it to? In some sense, what we did for Curie-Weiss was to find a connection between the t derivative and the h derivative. We have to play the same game: find something to add to this energy, like the h term in the previous model, that will allow us to close the equation.
So first, it should be simple, as the h parameter was in the previous model; but it should also be rich enough that we can relate the derivatives. When you look at the energy function, there is a term which is quadratic in x, like the first term in the Curie-Weiss model, the one with σ_i σ_j, which is quadratic in the σ's. So what I would like is to add a linear term in x to my model, like the sum of the σ_i that we added in the previous model. And I want to do this under a constraint, because I told you that the Nishimori property is important, and I want to make sure I do not destroy it. I do not want to look at the Hamiltonian, this energy function here, and just brutally add stuff inside. Because if I do, I break the fact that this Gibbs measure is a conditional expectation, and that would destroy the nice proof of the Nishimori property I had. So I still want to add a linear term, but I want to preserve this conditional-expectation structure. Let me write it down, because this is an important step: we need to enrich the model. For Curie-Weiss it was kind of easy, you barely noticed that I had added the h term; but for spin glasses in particular it is really subtle and not easy. It is not very intuitive, in general, how to make this addition. I won't explain more, but you see what we want: the properties we had for Curie-Weiss, sufficiently rich that we will be able to relate derivatives, but also sufficiently simple that we will be able to compute what happens when only this extra bit is present.
We want to add something linear, and we do not want to break the conditional-expectation structure. The way I will introduce it is to pretend that we also observe a linear observation of x̄: we observe sqrt(2h) x̄ + z, where z is a standard Gaussian vector, independent of the other variables in the problem. So now we have x̄, the thing we want to understand, and we have W and z, which are noise terms, and everybody is independent of everybody else. By saying that we also observe this, I now recompute the conditional law of x̄ given this richer observation: the Y we had already, and also this new vector. So now we observe these two things. What really happens is that this defines for us a new energy function, a new H, which now depends on t, h, and x. The first part is the H_N^0 from before. And now we have an extra term, which is linear in x; you see it is against this noise vector z. Then there are compensating terms which, I claim, are just there for our convenience, so let's not worry too much about them. They are similar to the ones in H_N^0, except that there the power was four and now it is two. This is the important part: if you think in terms of the Curie-Weiss model, the x's are like our σ variables, and we have a sort of random magnetic field given by this Gaussian. So again, let me summarize what we have done. There are the sources of randomness: x̄, W, and z. Out of these three variables, we build this function of x, which is here. And out of this function, we again build the Gibbs measure.
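Written out, the enriched energy described here should take the following shape; the constants are my reconstruction, following the pattern "power four before, power two now", and the lecture's normalizations may differ:

```latex
H_N(t,h,x) \;=\; H_N^0(t,x)
  \;+\; \sum_{i=1}^N \Big( \sqrt{2h}\, z_i\, x_i \;+\; 2h\, x_i \bar x_i \;-\; h\, x_i^2 \Big),
\qquad z \sim \mathcal{N}(0,\mathrm{Id}_N),
```

obtained, as for the t-channel, by expanding the log-likelihood of the additional observation sqrt(2h) x̄ + z.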
I am not going to rewrite the definition of the Gibbs measure; it is as before, just with exp(H_N(t, h, x)) in place of the previous energy. And now I can define the free energy, which depends not only on t but also on h: F_N(t, h) is one over N times the expectation of the log of the integral. You see, it is this normalizing constant, or moment generating function, and then I divide by N as before. There is one extra thing I do here, which is to take the expectation, because, as we discussed, the quantity inside still depends on x̄, W, and z. So maybe I want to define two quantities: one version which is still random, and one version after we take the expectation. F_N without the bar has no expectation, and F̄_N, with the bar on top, is its expectation. So it is one extra complication compared with the previous model, but to some extent it is analogous: we have these two parameters, t and h, we have our partition function, or maybe I should say free energy, depending on t and h, and I am trying to understand the limit of this quantity as N becomes large. And now I feel I am in a better position to maneuver, because I have these two parameters, t and h, as in the Curie-Weiss model, which in some sense makes things simpler than the original situation. So here is the main result I want to present today: this function F̄_N converges to some function f which, you guessed it, is the solution of a certain partial differential equation. You have seen a lot of new things today already, but at least when I write the equation it will sound familiar, because it is the same as for the Curie-Weiss model. There is one difference, actually: in the way we set it up now, h has to be non-negative.
So really, this equation is only well posed with h non-negative, while before it was with h ranging over all of R. Someone is asking whether the convergence is in the sense of local uniform convergence, and the answer is yes. In fact, all the estimates we have seen for the previous model are also true here: the functions are uniformly Lipschitz, so in particular the convergence is locally uniform. Another thing which is as in the previous model is the initial condition: this quantity at t = 0 does not depend on N, just as in the previous model. So that is the statement. I am going to spend the rest of the time, which is perhaps not very long actually, explaining the main steps towards proving this result. Thank you for the questions. There is really one fundamental ingredient in this proof, which to some extent is very similar to what we did for the Curie-Weiss model. Let me split this main ingredient into two parts. The first part is that when we compute the quantity ∂_t F̄_N − (∂_h F̄_N)², it is again the variance of something; for Curie-Weiss it was the variance of the magnetization, and here it is the variance of something a bit different, but it is still a variance. There is this random variable x, and I tried to explain that it is really not equal to x̄: if the observation is not very good, and in particular if you have basically no observation, then x is a resampling of x̄; and if you have complete information, then x equals x̄. What appears on the right-hand side is the variance of the scalar product between x and x̄. And again, notice that this is an identity: it is really an equals sign.
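In equation form, the result being stated is, as I read it from the lecture, with f denoting the limit free energy:

```latex
\bar F_N(t,h) \;\xrightarrow[N\to\infty]{}\; f(t,h)
\quad\text{locally uniformly},
\qquad
\partial_t f - \big(\partial_h f\big)^2 = 0
\quad\text{on } (0,\infty)\times(0,\infty),
```

with h restricted to be non-negative, and with initial condition f(0, ·) given by the value of F̄_N at t = 0, which does not depend on N.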
And if you notice this, as in the Curie-Weiss model, it gives you hope that as N becomes large there will be a certain cancellation: hopefully, in the limit, the equation will be satisfied with right-hand side equal to zero. That is what the main result says. For the previous model, we then carried out this scheme: first notice that the error term is a variance, and then find a way to express this variance in terms of other things we can hope to control or understand. For the previous model, we expressed the variance in terms of the second derivative. Here it is not an equality, but something very similar is true: the variance of x·x̄/N, that is, this quantity minus its expectation, squared, is bounded by (1/N) ∂²_h F̄_N. If I had written an equals sign here, it would have been exactly as in the Curie-Weiss model, except that the variable is not the same. But what I wrote is not yet completely true: there is another term. This other term comes from the fact that the measure we care about, this bracket, is still random, so we have to control it in some way; and this term measures the difference between F_N and F̄_N. In the Curie-Weiss model this term could not make sense: we only have one F_N, it is already deterministic, so if you want, the term was there but it was zero. My goal now is to justify a little more why this statement is true, and then explain how to close the argument: once we have this, how do we conclude the main result? Let me first focus on the first part: how do we see that these derivatives combine into this variance? We can call this the first lemma. Here are some observations when I differentiate with respect to h.
Remember that in the Curie-Weiss case, what we found was the mean magnetization. In this case, I claim that the h derivative is E⟨x·x̄⟩ divided by N. For the t derivative, you guessed it, it is going to be the square of this quantity. So once I have the lemma, it is clear that when I look at ∂_t minus (∂_h)² squared, I get the variance. I am not going to give a full proof of this, but I want to explain the key ingredient, which we can fancily call Itô calculus without Itô; a very fancy name for what it is. So I want to spend a bit of time explaining this one ingredient for the derivative identity. I am reaching the bottom of the page, so I will continue here. Yes: it will really just be calculus, and I want to display how to do that calculus. Let me start by observing this. If you know your Itô calculus, you know that when I take the exponential of a Brownian motion, which I will write in a funny way as sqrt(2t) X, and then subtract t, I get something of constant expectation. If I had written the Brownian motion at time t, this would have looked very familiar, or at least I hope so. Anyway, this is true, so if I differentiate with respect to t, that should give me zero. Let's see what happens if I differentiate this expression with respect to t. First I differentiate sqrt(2t) X, which gives me X over sqrt(2t); then I differentiate the minus t, which gives me minus one; and so I get E[(X/sqrt(2t) − 1) exp(sqrt(2t) X − t)] = 0, just by computing the t derivative. And this does not look obvious at all: at face value you look at it and think, well, we are lucky we know Itô calculus, because otherwise how would we see this? So the point I am going to display is how we see that this is correct.
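The first formula of the lemma, ∂_h F̄_N = E⟨x·x̄⟩ / N, can be sanity-checked numerically in the simplest case N = 1 with only the h-channel present, that is t = 0. The script below is my own check, under the assumed normalization sqrt(2h) z x + 2h x x̄ − h x² for the h-term; it compares a centered finite difference of the averaged free energy with E[⟨x⟩ x̄] for x̄ uniform on {0, 1}.

```python
import numpy as np

z = np.linspace(-8.0, 8.0, 32001)
dz = z[1] - z[0]
phi = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)

def free_energy(h):
    # Fbar(h) = E log sum_{x in {0,1}} exp( sqrt(2h) z x + 2h x xbar - h x^2 ),
    # averaged over xbar ~ Uniform{0,1} and z ~ N(0,1); here N = 1 and t = 0
    total = 0.0
    for xbar in (0, 1):
        logZ = np.log1p(np.exp(np.sqrt(2 * h) * z + 2 * h * xbar - h))
        total += 0.5 * np.sum(logZ * phi) * dz
    return total

h, d = 0.6, 1e-4
lhs = (free_energy(h + d) - free_energy(h - d)) / (2 * d)  # dFbar/dh, finite diff

# E[<x> xbar]: only xbar = 1 contributes; <x> = P(x = 1 | observation)
w = np.exp(np.sqrt(2 * h) * z + 2 * h - h)
rhs = 0.5 * np.sum((w / (1 + w)) * phi) * dz
```

The agreement rests on exactly the two ingredients named in the exercise: Gaussian integration by parts absorbs the z-term, and the Nishimori identity E[⟨x⟩²] = E[⟨x⟩ x̄] cancels the rest.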
That is, how do we see it if we do not know about Itô calculus? Oh yes, sorry, I had not said: in this expression, X is a standard Gaussian; I should have been more careful with the notation. I think of sqrt(2t) X as a Brownian motion because, if X is a standard Gaussian and I multiply it by sqrt(2t), it has the law of a Brownian motion at time 2t. Sorry about the confusion; I hope it did not cause too much damage. So if X is a Gaussian, I hope you are convinced that the expressions above are true. How do we see it? It is really something about Gaussians, and I can write it explicitly: the integral of x times exp(sqrt(2t) x − t) against the Gaussian measure exp(−x²/2) dx; and let me forget about the normalizing constant, which appears everywhere, so it does not matter. The important part is that these two exponentials want to come together and be integrated. The only thing I know how to do is integrate by parts, so we integrate by parts. What this amounts to is removing the factor x and differentiating the other function: when I differentiate exp(sqrt(2t) x) in x, a factor sqrt(2t) comes out, times the exponential, against the Gaussian measure. So if we go back to the identity above, we see that it is valid: the term E[X exp(...)] equals, by this Gaussian integration by parts, sqrt(2t) times E[exp(...)]; this cancels the sqrt(2t) in the denominator, and then one minus one is zero. And we are happy: we have done our Itô calculus without Itô. Just calculus. So, as another exercise, and again I hope I am not asking for too much, but I think it is doable: complete the proof of the lemma using the ingredients I gave you.
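The two facts used here can be checked numerically: for a standard Gaussian X, the expectation of exp(sqrt(2t) X − t) equals 1 for every t (the exponential martingale of Brownian motion evaluated at time 2t), and Gaussian integration by parts gives E[X g(X)] = E[g′(X)], hence E[X exp(sqrt(2t) X − t)] = sqrt(2t) E[exp(sqrt(2t) X − t)]. A quick quadrature check, my own illustration:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 40001)
dx = x[1] - x[0]
phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)    # standard Gaussian density

def E(vals):
    # expectation against the standard Gaussian, by Riemann sum
    return np.sum(vals * phi) * dx

results = []
for t in (0.3, 0.7, 1.5):
    a = np.sqrt(2 * t)
    g = np.exp(a * x - t)                       # exp( sqrt(2t) X - t )
    results.append((E(g), E(x * g), a))         # E[g], E[X g], and sqrt(2t)
```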
That is, using Gaussian integration by parts and this Nishimori property. The second part of the exercise: relate the MMSE to this time derivative of F̄_N. Find a relation between these two things, so that we are justified in believing that if we understand this t derivative, then we are good: we understand the minimal mean-square error. So that was for explaining the first part of the proof, why this combination of derivatives is a variance. The statement is on the other page; maybe I will not go back. I gave you the expressions for these derivatives, and although I did not give you the full proof, I really gave you all the ingredients that allow you to prove it. Then I want to explain a little why the bound on the variance is valid. How do we bound it? I will be very brief, one minute, and give just a very short idea of why we can bound this variance by these two terms. The first term is as in the Curie-Weiss model, so we know how to deal with it in our argument; the remaining question is how we show that the second one contributes little. I know the time is basically over, but don't worry, I won't be long; I'll just give the fundamental idea, and it will be very short. For the second part, I am going to consider a generic random variable, call it a, not necessarily Gaussian, together with two layers of expectation. What I want to observe is that I can split the variance by doing one expectation at a time: first, I can look at the variance conditionally on the extra, outside disorder.
Forgetting the outer expectation for a moment, I compare a with its partial average, the average over the inner layer of randomness. And that is not all: there is another term, which compares this partial average of a with the full average. Maybe you are not convinced that this is an equality; if you are worried about it, you can write an inequality with some constant in front. The inequality is clearly valid, and that is all we need anyway. The second part of the proposition is trying to control a variance like this, and the estimate is a sum of two terms. The first one really looks like Curie-Weiss; in some sense it is the previous argument, and it is the one that produces the second-derivative term, just as for Curie-Weiss. The second one is new: it is the one that produces the term in ∂_h F_N minus ∂_h F̄_N. There is some massaging around, but it is faithful to the spirit of how you derive this inequality: you split into these two terms, and then you analyze them by comparing a with its partial average, and the partial average with the full average. Now, the final ingredient I have to explain is how we show that this extra term, the comparison between F_N and F̄_N, is small. That is a new ingredient that was not part of the previous model, but in some sense it is not a difficult one: it is a very classical piece, which is just a concentration estimate; for instance, you can use one of the standard concentration inequalities for the free energy. But the estimate we wanted was about the derivatives of F_N and F̄_N, and here I am only saying that the functions are close, not that the derivatives are close. So how do we deal with that?
If you remember, the key message of the conclusion last time was that we need to control the error in an L¹ sense in the h variable, after we average over h. And because we are allowed to do this averaging, we can do, you guessed it, an integration by parts, and still use information about F_N and F̄_N themselves to control this derivative term. All right, I am already a bit past time. Thank you very much for your attention; it was a pleasure to do this. Thank you very much, Jean-Christophe, for a wonderful sequence of lectures. Let's unmute everyone and give Jean-Christophe a big round of applause, as soon as I remember how to unmute everyone; I know it is around here somewhere, or any other co-host can do it. Thank you. I am going to stop the recording now so that we can have further questions, and then everyone can unmute themselves.
We consider the problem of estimating a large rank-one matrix from noisy observations. This inference problem is known to have a phase transition, in the sense that partial recovery of the original matrix is only possible if the signal-to-noise ratio exceeds a (non-zero) value. We present a new proof of this fact based on the study of a Hamilton-Jacobi equation. This alternative argument yields better rates of convergence, and also seems more amenable to extensions to other models such as spin glasses.
10.5446/55640 (DOI)
At the same time, I apologize to everyone I didn't reply to. It's not because I don't love you; I love you. Okay, but let's get to the talk. If there are questions, please put them in the public chat. Today we are moving out of the 80s and into this decade, even though this particular argument is also from the 80s; it received some important modifications in this decade. Let's start by describing what this argument is about, and I am going to start with a very, very basic idea: using exploration for percolation. It is an extremely useful idea; even if you get nothing from this mini-course except that you can do exploration, I did my job, I can go home happy. So I am going to demonstrate the idea of exploration with a very, very simple lemma. Let's see what this lemma is. I remind you that C(0) is the cluster of zero, the connected component of zero: all the points which are connected to zero by open edges. It has some number of open edges, the ones that connect its points, and there are also closed edges, edges which have been deleted, which are basically the boundary of the cluster, because the cluster extends everywhere you have open edges. And λ is some number; don't worry about that. Here is the statement of the lemma. The first part is not so important; I want you to concentrate on the piece of the formula that I have highlighted. It says that (1 − p) times the number of open edges, minus p times the number of closed edges, is very small: basically of order the square root of the size of the cluster. The formulation of the lemma is correct, but what I am saying now is not completely precise: the probability that this quantity is bigger than 10 times the square root is already something like e^{−100}, which is extremely small, and so on. And it is true for any λ.
So lambda — we often take lambda of order square root of log n, so that the probability here becomes negligible. Okay? If you find this confusing, think about it at p equal to one half. At p equal to one half, it means that you have more or less equal numbers of open and closed edges. Okay? And at p which is not one half, you have to weight them a little bit. Okay? So let's see how to prove the lemma. The lemma is proved by a process called exploration. Let's understand what this means. Okay, we are going to define a sequence of sets of edges. The first set has zero edges, then the next one has one edge, two edges and so on, until no more. And here's how we do it. Assume at step i there exists some edge e which is not in S_i, but there is an open path in S_i from zero to one of the vertices of e. Maybe I should have prepared a picture, but I didn't, so you have to imagine it. S_i is some set of edges, some of which are open, some of which are closed. We are only interested in paths composed of open edges in S_i. Suppose we can find some edge which is not in S_i but which has such an open path leading to one of its vertices. Of course, there could be many such edges. If there are many, we choose one of them in some arbitrary way. If you don't like the word arbitrary, then you could, for example, choose the first one in lexicographic ordering, or in any other way that you love. And define S_{i+1} to be S_i with e added. And if there is no such edge, the process stops. Okay. So what is this process doing? It's really exposing the cluster one edge at a time. Okay. You start with no edges, but you are at zero, and zero has 2d edges surrounding it. So you, for example, take the first edge — open or closed, you don't care — and you add it to S_i. Then you take, for example, the second edge — open or closed, you don't care.
You add it to S_i. After you've finished all the edges surrounding zero, you ask: well, are any of them open? If not, then that's it. That's the cluster — the cluster contains only zero and we can go home. The process stops. But if some of them are open, then you can continue to explore edges connected to these new vertices, which we now know are in the cluster. So we are really finding out what the cluster of zero is, step by step. Okay, I hope this explanation was clear. Let's go on. Now to this process of sets we associate a process of numbers X_i, which is just what you would guess: (1 minus p) times the number of open edges — exactly like the term we have here — minus p times the number of closed edges, exactly like the term with the minus. Okay, so in the end, X_n will be exactly this quantity. And here comes the most crucial point of the argument, basically the whole argument: X_i is a martingale. I hope everyone knows what a martingale is, but let me define it just in case. A martingale is a process such that if you condition on it at time i and look at what is added at time i plus one, the expectation of the addition is zero. This is exactly what is happening here. Why? Let's compare X_i to X_{i+1}. S_{i+1} is just S_i union one edge. This edge is new — we don't know anything about it before we added it to the set. It's just some arbitrary edge, and we know that it has probability p to be open and 1 minus p to be closed. So what's the expectation of the increment? It's (1 minus p) times the probability that it's open, which is p, minus p times the probability that it's closed, which is 1 minus p. So you get p times (1 minus p), minus p times (1 minus p) — it's just zero. And this is exactly what it means to be a martingale. The lemma now follows immediately from the Azuma-Hoeffding inequality for martingales.
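As a sanity check, here is a small simulation of the exploration just described, for bond percolation on Z² (my own illustration, not from the talk): edges are revealed one at a time, each independently open with probability p, and X_i = (1−p)·(#open) − p·(#closed) is tracked. By the martingale bound, the final value should be within λ√n of zero except with probability about e^(−λ²/2).

```python
import random
import math

def explore(p, max_edges, rng):
    """Explore the open cluster of the origin in Z^2 one edge at a time,
    revealing each edge's state only when it is first examined (this
    on-the-fly revealing is exactly what makes X a martingale).
    Returns the trajectory of X_i = (1-p)*#open - p*#closed."""
    reached = {(0, 0)}        # vertices known to be in the cluster
    seen = set()              # edges already revealed, as frozensets
    frontier = [(0, 0)]       # cluster vertices with possibly unexplored edges
    X, traj = 0.0, [0.0]
    while frontier and len(seen) < max_edges:
        v = frontier.pop()
        x, y = v
        for w in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            e = frozenset((v, w))
            if e in seen:
                continue
            seen.add(e)
            if rng.random() < p:          # edge is open (probability p)
                X += 1 - p
                if w not in reached:
                    reached.add(w)
                    frontier.append(w)
            else:                         # edge is closed (probability 1-p)
                X -= p
            traj.append(X)
            if len(seen) >= max_edges:
                break
    return traj
```

Running a few trials at p = 1/2 with λ = 5, the bound |X_n| ≤ 5√n holds comfortably, as Azuma-Hoeffding predicts.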
So okay, let's see — a couple of comments from the chat. Let me find it... ah, here's the chat. Yes: the boundary exactly means edges touching the cluster, having one vertex in the cluster, or two. Somebody asked if it is intentional that this is independent of n. Yes, it's independent of n. It's exactly like asking what is the probability that the sum of n plus-minus random variables is bigger than square root of n — that has an estimate independent of n. So you should compare the left-hand side to a sum of independent variables, which is of order square root of n. And no, I don't care about which order the edges are chosen in; it doesn't matter at all for the argument. I see some people have already answered some questions. Okay, a question from Lior — but first, just a poll, like we did last time: raise your hands, people who want to see the statement of Azuma's inequality. Now, to address Lior's question: Lior was asking whether you can look at the whole cluster first and then choose the edges. No, you cannot do that. It's important that at every step you have no information about the edge being examined. Okay, I see there are some people who want to see the statement, so let me show it. It's not very complicated. Suppose X_i is a martingale, and suppose it has differences smaller than m_i — m_i in our case is just one, or if you want, the maximum of p and 1 minus p, which is certainly smaller than one. Then the probability that the martingale exceeds lambda — okay, in the usual formulation you subtract X_0, but X_0 is zero in our case — is exponentially small in lambda squared.
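For reference, the statement being shown is the standard Azuma-Hoeffding inequality, which with X_0 = 0 (as in our case) reads:

```latex
\[
|X_{i+1}-X_i| \le m_i \ \text{ for all } i
\quad\Longrightarrow\quad
\mathbb{P}\big( X_n \ge \lambda \big) \;\le\;
\exp\!\Big( -\frac{\lambda^{2}}{2\sum_{i=1}^{n} m_i^{2}} \Big),
\]
```

and the same bound holds for −X_n; so in our setting, with m_i ≤ 1, one gets P(|X_n| ≥ λ√n) ≤ 2e^(−λ²/2).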
Okay, so this is an extremely useful inequality. I should probably move the chat window off my main window — I keep forgetting that. Okay. So let's understand a little bit more about this argument before we apply it. It's really flexible. Here what I did was to start from zero, but you can start from an arbitrary set of vertices. You can put in all kinds of complicated stopping conditions. Here I stopped once I had n edges and didn't go any further, in order to have a clean statement, but I could stop by some geometric condition — any condition which is a stopping condition can be used. So there are really lots and lots of variations. I will show just one, which is a bit closer to how we will use it in the next part. So — I remind you this notation for a cube, for example from minus n to n in Z^d. I remind the people who had too much fun in the last two days that this whole course is about Z^d and percolation on Z^d, and that this is a cube in Z^d. So let S be the set of vertices connected to the boundary. And again we define E, the set of open edges, and B, the set of closed edges, and we again define X in this way. Then again — this n to the d over two: n to the d is the volume of the box, and the power one half is the square root. Remember, we had a square root before, right? So this is actually the square root of the volume of the box. There should be a constant lambda here too; sorry for the sloppiness. Okay, and the proof is exactly the same in a way. You do exploration, but starting from the boundary, and then you examine the edges inside the box one after the other, in an arbitrary order, until you discover all vertices which are connected to the boundary by open paths. So let me not discuss this further.
And — okay, questions before I make this whole transparency disappear? For a reference, I need to think about it, but certainly this whole talk is contained in the paper of Cerf from 2015, so that would be a good reference. But exploration itself should have some more canonical reference, I think, if I knew it. Okay, so this is enough for the generalities, and now we want to go to a more specific application of this idea. And what we will study is these events. So I have to introduce here a new notation, which is by the way not a standard notation — it's especially for this talk; I had to write special LaTeX code for these double arrows. So — there are many people raising their hands; I don't know if these people have questions. Okay, let me take questions a bit later; I want to get to at least the main point before starting to take questions more seriously. So if something is unclear — I will take questions in maybe 10 minutes or so. Okay. So let's understand what this notation means. We have some subset E of Z^d — usually it will be a box — and we have two subsets A and B of E, and I am making this beautiful notation that A is doubly connected to B. What do I mean? There are two disjoint clusters in E which each intersect both A and B. Now be very careful that this is not the same as having two disjoint paths. We discussed that event two days ago — there was this BK inequality and so on. This is something slightly different. It's a stronger requirement. Okay.
So we are asking that there are two open paths from A to B, but more: not only are the paths disjoint, they cannot be connected by some bridge that goes between them. They are really two disjoint clusters. I don't think I prepared a picture, which is maybe a pity — I thought I could draw here, but it somehow disappeared. It should have been here at some point, but it's okay. So be careful about this distinction. I'm going to estimate this probability, and I cannot estimate it without the requirement that the clusters are disjoint. This is very important. Keep it in mind. Okay. So I hope the notation is clear, and very often we will have this for a box. In this case, I will not write in which space this is; I will just write it this way: A has two connections to the boundary of E, the connections are in E of course, and the clusters are disjoint in E also. So I hope this notation is clear, and let's get to the statement of the theorem. Okay. So again, I remind you this is an n by n by n box, d times. And let's examine the event for x, y — just two vertices which are neighbors. And since I've already made the definition disappear, let me repeat it: what does this double arrow mean? It means that they are connected to the boundary of the box by two clusters in the box which are disjoint in the box. In particular, if the clusters are disjoint, and x belongs to one of them and y belongs to the other one, then x cannot connect to y. So this notation and — okay, sorry, I want just this sentence, not the top that insists on being selected as well — this sentence are equivalent. Both x and y are connected to the boundary, but they are not connected to each other. So it's exactly the same thing.
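To make the distinction concrete, here is a small checker for the double-arrow event on Z² (an illustration of mine, not from the talk): it computes the open cluster of each endpoint inside the box and tests that the two clusters are disjoint and that both touch the boundary — a strictly stronger requirement than finding two edge-disjoint paths.

```python
from collections import deque

def cluster_in_box(v, n, open_edges):
    """Open cluster of v inside the box [-n, n]^2: connections must stay
    within the box.  open_edges is a set of frozenset vertex pairs."""
    seen, q = {v}, deque([v])
    while q:
        x, y = q.popleft()
        for w in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if max(abs(w[0]), abs(w[1])) > n or w in seen:
                continue
            if frozenset(((x, y), w)) in open_edges:
                seen.add(w)
                q.append(w)
    return seen

def doubly_connected(x, y, n, open_edges):
    """The double-arrow event of the talk: x and y are joined to the
    boundary of the box by two clusters that are disjoint inside the box.
    Note this is stronger than 'two disjoint paths': the whole clusters
    must not meet."""
    cx = cluster_in_box(x, n, open_edges)
    cy = cluster_in_box(y, n, open_edges)
    touches = lambda c: any(max(abs(a), abs(b)) == n for a, b in c)
    return cx.isdisjoint(cy) and touches(cx) and touches(cy)
```

For example, two separate open paths from the neighbors (0,0) and (0,1) to the boundary of Λ₂ give the event; opening the edge between (0,0) and (0,1) merges the clusters and destroys it.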
And the statement of the theorem of Aizenman, Kesten and Newman is that the expectation of the number of such edges is smaller than a constant times n to the d minus one half, times a square root of log n. Okay, so let me take a few questions now, and people can stare a bit at the statement of the theorem and make sure that they understand exactly what it says. Okay — so, yes, I see that there was the following question: can the clusters meet each other outside Lambda_n? Yes, I'm not forbidding that. What is important here is that they are not connected in Lambda_n. It's possible that x is connected to the boundary, y is connected to the boundary, and afterwards, outside the box, the clusters continue and connect. This is not a problem for our event. So I'm only requiring that x does not connect to y inside the box; outside the box they are allowed to continue and connect. Okay, let me see if there are any other questions — okay, I don't see any new questions. Were there any questions from before in the chat that I should answer? Did one of the moderators notice a question which they think is interesting? No other questions in the chat. Okay. So let's start the proof of the theorem. I think after this we have our break. Okay, the proof I'm giving here is due to Gandolfi, Grimmett and Russo. They simplified the argument of Aizenman, Kesten and Newman incredibly. This paper is just four pages. By the way, that's another very, very nice reference — people asked for references. This paper is probably from '87 or something like that, but anyway, you will not have a problem finding it, and it explains this very nicely in four pages. Okay, so let's see how the proof goes. So we define X_S — you won't be surprised by this definition at this point — for some arbitrary set of vertices S, to be (1 minus p) times the number of open edges between its vertices, minus p times the number of closed edges.
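Written out, the theorem — with the cheap √(log n) factor the talk allows itself — says:

```latex
\[
\mathbb{E}\,\#\Big\{ \text{edges } \langle x,y\rangle \subset \Lambda_n \;:\;
x \text{ and } y \text{ are joined to } \partial\Lambda_n
\text{ by disjoint clusters in } \Lambda_n \Big\}
\;\le\; C\, n^{\,d-\frac12}\,\sqrt{\log n}.
\]
```

Here C depends on the dimension; a reconstruction of the slide's formula from the surrounding discussion.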
Now be careful: for the closed edges we want one vertex in S and both vertices in the box, so there is some issue near the boundary. I've written it properly on the slide, but it's not actually important. So I hope that the definition of X is clear — it's certainly written quite clearly. Now let C_1, C_2 and so on be all the clusters in Lambda_n that touch the boundary. A cluster in Lambda_n is a set of vertices all connected within Lambda_n — things which are connected only outside are not considered the same cluster in Lambda_n. And clusters which are just somewhere in the middle, I don't care about; I care only about clusters which touch the boundary of Lambda_n. Then — okay, this formula is the essence of the proof, so let's make sure we understand it perfectly. Let's see what we have here. We take all these clusters, we calculate their X's and sum. We take the union and calculate X of this union as a set — all of these are sets of vertices, the union is a set of vertices, and you can apply X to it. And this is an equality — the argument is simple but confusing, so let's see why it is correct. Let's see what an open edge contributes to this difference. If it's open, then it necessarily belongs to exactly one cluster. An open edge cannot belong to two clusters, and it doesn't belong to boundaries. So it contributes 1 minus p to the X of the cluster in which it's contained, and it contributes 1 minus p to the X of the union, because it's contained in the union. So its contributions cancel out completely. Now let's see what you get from a closed edge which belongs to the boundary of exactly one cluster: it's closed, and one vertex of it belongs to a cluster that touches the boundary.
And the other vertex can belong to the same cluster, or can belong to a cluster that doesn't touch the boundary — I don't care. In both cases it contributes p here and p here, and again the contributions cancel. So what doesn't cancel? What is left? What is left is exactly the event we are counting, because if the edge x, y is closed, x is connected to the boundary, y is connected to the boundary, and these are different clusters, then it appears once in the union and twice in the sum over clusters. So you might think I made a sign mistake — I apologize for the confusion — but no, actually there is no missing minus: it contributes minus p to the union term and minus 2p to the sum, so the identity is actually correct as stated. So much for this equality. Yes, everything is clear, so let's continue. Now we apply the exploration argument. The exploration argument tells us that the X of each cluster is smaller than its square root times square root log n. I warn you that we will have lots of logs. These logs can sometimes be avoided, but once you allow a square root log n with some constant, the failure probability can be made like n to the minus 100, and then you don't care about these events. That's always the probabilist's cheap way out: put in a square root log and the failure probability is negligible. So I'm using the exploration argument — maybe let's look at it just for a second again. We take lambda equal to square root log n, and then you get e to the minus a constant times log n, so the failure probability is polynomially small in n. Okay, I'm back here. So, allowing the square root logs, I know that with high probability — meaning, say, probability bigger than 1 minus n to the minus 100 — both quantities are bounded by the square roots of the corresponding sizes. For the union I did a quick job and just bounded it by the square root of the volume of the whole box.
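The edge-by-edge bookkeeping of the last two paragraphs amounts to the following identity, with X(S) = (1−p)·#{open edges in S} − p·#{closed edges touching S}, and V the number of closed edges joining two distinct boundary clusters:

```latex
\[
X\Big( \bigcup_i C_i \Big) \;-\; \sum_i X(C_i) \;=\; p \, V ,
\]
```

since open edges, and closed edges touching a single cluster, contribute the same amount to both terms and cancel, while a closed edge joining two distinct clusters contributes −p to the union term and −2p to the sum.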
Okay, so this transparency is the same as the previous one; I just compressed some formulas for more space. You don't have to read it again if you followed the last transparency carefully. Now let's use Cauchy-Schwarz. We need to estimate this sum of the X's, and each X is bounded by the square root of the size of C_i, times those square root logs. So we need to understand the sum of the square roots of the sizes of the clusters touching the boundary. So I apply Cauchy-Schwarz: I write each term as the square root times one, and then I get the square root of the sum of the sizes, times the square root of the number of terms. Now the first factor I can simply bound by the volume of the whole box, which is n to the d — so the first factor is at most the square root of n to the d. For the second factor, recall that we are interested in clusters that touch the boundary — I have no luck with text selection today. Okay, so how many clusters can touch the boundary? Each boundary vertex can belong to at most one cluster, so the number of clusters that touch the boundary cannot be bigger than the size of the boundary. So you can bound it quite trivially by a constant times n to the d minus one. So what did we get eventually? And this is the end of the proof. Okay, we know that the quantity we are interested in, this count V, is equal to this combination. With high probability, the union term has an even much better estimate — n to the d over two, not n to the d minus one half — and this sum has this estimate, times those square root logs. So you get exactly the claimed bound. And the last sentence just says that everything happened with high probability: if "with high probability" means probability 1 minus n to the minus 100, then whatever contribution you get on the exceptional event of probability n to the minus 100 is negligible compared to the main term. So that's the whole proof. And I certainly think you deserve a break.
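Assembling the estimates of this step in one chain (constants absorbed into C, C′):

```latex
\[
\sum_i \sqrt{|C_i|}
\;\le\; \Big( \sum_i |C_i| \Big)^{\!1/2} \Big( \sum_i 1 \Big)^{\!1/2}
\;\le\; \big( n^{d} \big)^{1/2} \big( C\, n^{\,d-1} \big)^{1/2}
\;=\; C'\, n^{\,d-\frac12},
\]
```

so multiplying by the √(log n) from the exploration bound, and adding the n^(d/2) coming from the union term, one recovers pV ≤ C″ n^(d−1/2) √(log n) with high probability.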
So let's take a five-minute break, and after that I will take questions, and then we'll have some other entertaining stuff to fill the rest of the hour. Okay, so, five minutes break. Okay. Thank you. So if there are any questions, please post them in the chat and we will resolve them as soon as Gady returns. So that was the first question, and that's the next question, I believe. Yes. So there is a question here whether the bound on the number of clusters can be improved. Right now, only very slightly. I hope I will return to this question at the end of the talk — today's talk is going exactly as planned, so if it continues this way, I will answer this question in more detail then. But again, let me repeat the question: the question was, can we improve this bound? And I will refer to this question at the end. — So what was the main result in the Gandolfi-Grimmett-Russo paper? Is this the main one? I think the main objective of the paper is this; let me check the paper, but that's what I remember. Why can't I find it? Okay, just a second. You don't see my window now, do you? Just the slide? Good. So Sarah has posted a link to the Gandolfi paper in the chat. Okay, good. So maybe a couple more questions. So, you know, whether the bound of n to the minus one half, square root log n, can be shown for a single edge, as opposed to the sum over all of them — I will get to that in a second. And Dio is asking about the exponent: the d minus one half. It's certainly a smaller error term. Look, our d is at least two, let's say — it's an integer, and it's at least two, because the one-dimensional calculation is not so exciting. So in this case the term n to the d over two is certainly smaller than n to the d minus one half, and it's negligible. Oh, yeah. Okay. Yes, this is d minus one half. There are no parentheses here — it's not (d minus one) divided by two, just d minus one half.
Maybe I should have used clearer notation, but then I wouldn't have had space for the whole proof. Okay. I think the five-minute break is over, and people who haven't come back with their coffee can only blame themselves. And — yes, this corollary is also due to Aizenman, Kesten and Newman; I think Gandolfi, Grimmett and Russo simply gave a shorter proof. Okay. Okay, so let's get to the corollary that was already asked about in the chat. You can get, from this estimate on the number of edges, an estimate for a single edge. Why? Because — you have to play here a bit with n and 2n. By the way, could you hear me now? I got a message. Okay. You have to play a bit with n and 2n: the n here should be 2n if you want to conclude the corollary from the theorem. So how do you get the corollary from the theorem? You use the theorem with 2n. If some edge is doubly connected to distance 2n, then in particular it satisfies the event in the theorem. So the corollary just follows from replacing n by 2n and dividing by the number of edges. It's very straightforward. And a few more remarks about the theorem before we continue. First of all, this is also a reasonably flexible argument, and people have been doing variations on it quite successfully. Here's one that I particularly like, for no obvious reason. You take a box, you do the exploration from the left, you do the exploration from the right, you take the union and do the subtraction, and then you learn something about edges which are connected both to the left and to the right. Let me not explain exactly — this version has more subtleties than the version that I showed you. And Tom — I guess someone said he was teaching —
so he won't come today, but he has a version where he does the exploration not from the boundary of some set, but from random points, in his favorite setup where you have a non-amenable group. So there are lots of variations on this argument that you can do by exploring from various places. Okay. Okay. So as I promised, I want to get today to something from this decade. For that, I have to take you back to the previous hour. This theorem was proved in the previous hour — I hope people remember, but let me state it again. Suppose S is some finite set in Z^d containing zero; then the sum over all x in the boundary of S of the probability that zero is connected to x inside S is bigger than one. And this holds only at criticality — only at p_c does this argument work. The argument that I showed you before about double connections works at any p: at some point I divided by p, so it works at any p, say, not too close to zero. So that argument was not restricted to p_c, but this argument is. This we saw in the previous hour, and it has a few applications. I noted two here. One is from a paper of myself and Asaf Nachmias from 2011 — I hope you can read the statement; I'm not going to read it out loud. And the second is the lemma that is the second ingredient in this paper of Cerf, the paper which uses the Aizenman-Kesten-Newman argument. It says that if you take any two points in a box of size n, then the probability at criticality — this is not true for p smaller than p_c — that they are connected in a box of size 2n (you have to increase the box a little bit; it's not the end of the world) is bigger than some polynomial in one over n. And there is a remark here that all constants depend on the dimension, which also holds for what we had before.
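In symbols, the theorem recalled from the previous hour reads as follows (the requirement that S contains 0 is implicit in the talk):

```latex
\[
\text{At } p = p_c:\qquad
\sum_{x \,\in\, \partial S} \mathbb{P}\big( 0 \longleftrightarrow x \ \text{inside } S \big)
\;\ge\; 1
\qquad \text{for every finite } S \subset \mathbb{Z}^d \text{ with } 0 \in S.
\]
```

This is the only place in today's argument where criticality is used; everything else works at any p bounded away from zero.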
Okay, so again, this is certainly not true for p smaller than p_c — in that case the probability can decay exponentially. But it is true at criticality in any dimension. And it's a corollary of the previous theorem, in a way that I'm going to show you now. Okay, so what I want to show you now is the proof of this lemma. Here it's written again. Let's assume first that x and y are on the same line: all their coordinates are identical except one, and there is also an assumption that the difference is even. That's not so important, but it will make life a little bit easier. Okay, so let's just assume that, and let's see what we do in this case. Now, we know from the previous theorem that the sum of these probabilities is bigger than one. In particular, there is at least one term which is bigger than one over the number of elements, and the number of elements is just the size of the boundary. So there must be at least one z such that this probability is bigger than one over the size of the boundary — this 2d here on the slide is a mistake, a leftover from something. Okay, and the size of the boundary: I'm using the previous theorem at scale K, where 2K is the difference between x and y, and I'm putting in the box Lambda_K, the box from minus K to K, so its side is 2K. So the size of the boundary is a constant times K to the d minus one, with some constant that depends on the dimension that we don't care about. Okay, now we use the symmetries of the lattice. Our lattice has rotations and reflections, and I'm mostly interested in the reflections.
So first of all, I'm going to use rotation symmetry to require that this z lies in a given face of the cube — in this case, the face whose first coordinate is K. Every face of the cube is given by choosing one of the coordinates and then choosing whether it's K or minus K; I'm choosing the face where the first coordinate is K, positive. So this is just a definition — we are allowed to make this requirement because of rotation symmetry. Now let's reflect through the first coordinate. Okay, so if z is (z_1, z_2, ..., z_d), then z-bar is (minus z_1, z_2, ..., z_d). And again, by the reflection symmetry of the lattice, we also have that the probability that zero connects to z-bar is bigger than c over K to the d minus one. It's the same quantity, just written in a more compact way. Okay, I hope this part is clear — this was just the use of reflection and rotation. Now translate: we translate the first event to x and the second to y. So instead of zero connected to z, you get x connected to x plus z, and instead of the connection being in Lambda_K, it's in x plus Lambda_K. I remind you that this is the notation for translation: x plus Lambda_K means translate the whole box by x. So this is exactly the same probability as before. And the other event, zero connected to z-bar, we translate to y: instead of zero you get y, instead of z-bar you get y plus z-bar, and instead of Lambda_K you get y plus Lambda_K. And both these probabilities are bigger than some constant over K to the d minus one. But here is the point: x plus z and y plus z-bar are the same point! Notice very carefully: the difference between x and y is just (2K, 0, ..., 0), z_1 is K, z-bar_1 is minus K, and the other coordinates are the same. So x plus z is the same as y plus z-bar, just by the way we constructed them. Okay.
So I hope you understood — I kept from the previous slide only this conclusion, and I already replaced y plus z-bar by x plus z, because we just said it's the same point. But everything else is exactly the same as on the previous transparency, including the boxes. Okay, now both these boxes are contained in our box Lambda_{2n}. Remember that eventually we are working in this bigger box — this is why we need the 2n. Be careful here: K can be as large as n, and x and y can be anywhere in the box Lambda_n, so in order to have one box contain the translated boxes, you must enlarge it. This is exactly why we have 2n. Okay, and if we replace x plus Lambda_K by Lambda_{2n}, the probability only increases, because we enlarged the region — it's easier to connect if you have more space, right? And the same thing holds for y plus Lambda_K. Okay, and now we are going to use the FKG inequality. Since last time people were not familiar with BK, I will skip the poll and go directly to explaining what the FKG inequality is. The FKG inequality — maybe I will put it on the screen — says that if you have two increasing events, then they are positively correlated: the probability that both occur is bigger than the product of the probabilities. Okay, so let's just go over the definition very quickly. A function on configurations is called increasing if it increases when you open more edges — so, a property of the configuration — and any two increasing events are positively correlated. Someone asks if I really need FKG here. Up to this point, I didn't really need FKG yet, but I will in a few steps.
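The FKG (Harris) inequality can be verified exactly on a toy example by summing over all configurations — a small illustration of mine, using two increasing crossing events on a 4-cycle:

```python
from itertools import product

def connected(u, v, open_idx, edges):
    """Is u connected to v using only the open edges?  Simple DFS."""
    adj = {}
    for i in open_idx:
        a, b = edges[i]
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    stack, seen = [u], {u}
    while stack:
        w = stack.pop()
        if w == v:
            return True
        for z in adj.get(w, []):
            if z not in seen:
                seen.add(z)
                stack.append(z)
    return False

def harris_check(p):
    """Exact P(A), P(B), P(A and B) for two increasing events on a
    4-cycle, by enumerating all 2^4 edge configurations."""
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # the cycle 0-1-2-3-0
    pa = pb = pab = 0.0
    for omega in product((0, 1), repeat=len(edges)):
        open_idx = [i for i, s in enumerate(omega) if s]
        w = 1.0
        for s in omega:
            w *= p if s else (1 - p)           # product measure weight
        A = connected(0, 2, open_idx, edges)    # increasing event
        B = connected(1, 3, open_idx, edges)    # increasing event
        pa += w * A
        pb += w * B
        pab += w * (A and B)
    return pa, pb, pab
```

For every p, the enumeration confirms P(A ∩ B) ≥ P(A)·P(B), exactly as FKG guarantees for increasing events.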
Okay, but just as practice for what FKG is: the probability that x is connected to y is certainly bigger than the probability of the intersection of these two events — oh, this should still be x plus z, but anyway, it's the same, so you shouldn't be angry with me — because if both occur, then x and y are connected. So the intersection implies the connection; it's a smaller event, so it has a smaller probability. And then, using FKG: both of these events are increasing. If it was connected before and you open more edges, then it's still connected, so this is an increasing event, and the same is certainly true for the other one. So this is bigger than a constant times the product of these two probabilities — this c is of course not the same as the earlier c; it's its square — divided by K to the 2d minus 2. And this proves the lemma in the case that x and y are on a line, with an even distance. So let me not take questions yet and just finish the lemma. We proved it in this case; by taking a slightly smaller c, we can also remove the requirement that the distance is even. This is easy: if x minus y is odd, you just take a neighbor of y at even distance, open one edge, and use FKG again. What happens if they are not on a line? If they are not on a line, then I'm going to construct a sequence of points going from x to y by exchanging one coordinate at a time. So x_0 is the same as x. In x_1, I replace the first coordinate of x by the first coordinate of y. Then in x_2, I replace also the second, and in x_3, also the third. So I'm replacing one coordinate after the other until I eventually get to y. In this process, each x_i and x_{i+1} differ in exactly one coordinate, so we can use the case that we have already established.
The case we have already established says that the probability that x_i and x_{i+1} are connected in Λ_{2n} is bigger than c n^{−(2d−2)}. I use here that both endpoints lie in Λ_n — and this is easy, because lying in Λ_n means all coordinates are between −n and n, and each hybridization of x and y therefore also has all coordinates between −n and n. So all the intermediate points lie in the box Λ_n, and we can apply the established case to each consecutive pair. Now we use FKG, and here we really need it; we can't avoid it. The probability that x connects to y is certainly at least the probability that all the consecutive pairs are connected, since that is a harder event. By FKG, all these events being increasing, this is at least the product of the probabilities, and we get a constant again — not the same constant as before, but some power of it — divided by n to the power 2d² − 2. And that is the end of the proof of the lemma. I think I have about a minute and a half left, so let me make a few more comments and then take questions about the whole talk. First comment: this bound was improved very recently, just this year, to n to the minus d². There is certainly a lot of interest in improving this value — n^{−d²} still has no spectacular corollaries, but improving it enough would be important. And interestingly, the proof uses a fixed point theorem, so I quite like that paper. Let's go back to where we were.
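The chaining step just described can be summarized in one display (my notation, following the talk; x = x_0, x_1, …, x_d = y are the interpolating points, each consecutive pair differing in one coordinate):

```latex
\mathbb{P}\big(x \leftrightarrow y \text{ in } \Lambda_{2n}\big)
\;\ge\; \mathbb{P}\Big(\bigcap_{i=0}^{d-1}\{x_i \leftrightarrow x_{i+1} \text{ in } \Lambda_{2n}\}\Big)
\;\overset{\text{FKG}}{\ge}\; \prod_{i=0}^{d-1} \mathbb{P}\big(x_i \leftrightarrow x_{i+1} \text{ in } \Lambda_{2n}\big),
```

and each factor on the right is bounded below by the one-coordinate case already established.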
Now, from the theorem we already proved — that the probability of two disjoint connections from a point to distance n decays like 1/√n — and the lemma I just explained, we can get Cerf's theorem. I will certainly not do this last part of the proof, because I have thirty seconds or so, but let's read the statement. Now we must be at p_c — at least the way I have in mind to prove it only works at p_c. Let me compare with the corollary from before: the corollary said that the probability that two neighboring points are connected to distance n by disjoint clusters is smaller than C/√n. Here I allow a much bigger separation. Take a box of size, say, n^{1/10} — small, but not microscopic; you can call it mesoscopic, an intermediate-scale box. Then the probability that this box is connected to the boundary of Λ_n by two disjoint clusters is smaller than n to some small negative power — certainly this constant in the exponent is smaller than one. The corollary did not need criticality, but this theorem is proved using the lemma I just showed you, which does use criticality; so at least in the most naive way — and certainly in the way Cerf proves it — it only works at p_c. Actually the statement is true at any p: the generalization to arbitrary p is in our paper — my paper with Hugo and Vincent — that we put on the arXiv this year. I certainly don't have time for the proof, so let's skip it; in the end it combines these two elements.
So this is the same theorem, except I have made the constant explicit. The proof gets from the probability that a box is connected to the boundary by two disjoint clusters to the event that two points are connected by two disjoint clusters via what is called a patching argument — sometimes a gluing argument; the name is not important, it is one of the more or less standard arguments. You will have to read it in the paper; I will not do it today. And as I said, the theorem actually holds for all p; this is not in Cerf's paper, but the proof is essentially the same. Now, before I finish and take more questions, let me answer a question that was asked in the chat: whether it is possible to improve the bound on the number of clusters. This was Cerf's idea — he had a complicated bootstrap scheme for improving the estimates. Let's understand the scheme in a very general way. This is the diagram I showed you. If you have an estimate on the number of clusters, you get an estimate on this quantity, the probability that two points are connected to the boundary by disjoint clusters; and a better estimate on the number of clusters gives a better estimate for this, because the sum over all clusters of the square roots of their sizes appeared at the crucial point. A better estimate for the two-point probability would in turn give a better estimate for the probability that a little box is doubly connected, since, as I said, that follows from it by a patching argument.
I didn't explain that argument, but you can imagine more or less how it goes. Once you have a better estimate for that, you can actually get a better estimate for the number of clusters: if not many boxes connect to a long distance, the clusters must be separated by a polynomial factor. Then you can repeat, getting better and better estimates, and the scheme converges to some estimate better than what we had — and, most importantly, better than one half. I hope you got something from this picture; to really understand this part of the argument you can only read Cerf's paper itself. Unfortunately the end result was a little disappointing: the scheme did not converge very effectively, and what he got, for example in dimension three, for the exponent of the probability that two points are connected by disjoint clusters to the boundary, is 12/23 instead of 1/2. A little disappointing — but maybe it will yet be important. Okay, that is definitely what I wanted to say for today. I am a few minutes over time, so let me stop the talk and only take questions. Thank you. — So let's unmute everyone to thank Gadi; I will unmute just Gadi now. At this point I will stop the recording; we can have more questions, and afterwards people can join the breakout rooms for informal discussions.
Critical and near-critical percolation is well-understood in dimension 2 and in high dimensions. The behaviour in intermediate dimensions (in particular 3) is still largely not understood, but in recent years there was some progress in this field, with contributions by van den Berg, Cerf, Duminil-Copin, Tassion and others. We will survey this recent progress (and a few older but not sufficiently known results).
10.5446/55642 (DOI)
Okay, and we are live, and the recording has started. Welcome everyone to the fourth session of the week. Today we will have two lectures, both being recorded — so if you don't wish your audio and video to appear in the recording, please keep your microphone and camera switched off. Let me also mention that next week we have a rather full week of material, with a minicourse by Ivan Corwin on Monday, Tuesday and Wednesday, followed by several short lectures on related topics on Thursday and Friday; there will be an email to the mailing list about that this afternoon. With that, I hand over to Nina Gantert to introduce the speakers. Over to you, Nina. — Yes, thanks, Luigi. Today we will have two talks. The first speaker is Piotr Dyszewski, the second is Sam Johnston; you can see them both here. Piotr, who is currently a postdoc in Munich and also affiliated with Wrocław, will start, and his topic is branching random walk with stretched exponential tails. The screen sharing works for me, and I hope for you as well. Okay, Piotr, you can start. — Thank you, Nina. For most of Nina's lectures she focused on branching random walk where the displacements have light tails. What I want to do today is show you how you can work with branching random walks whose displacements have heavy tails — by which I mean that the exponential moments do not exist. First, let's recall the setting. We work on the real line and start with one particle at the origin. After some time this particle dies and gives birth to a random number of new particles, and we assume that the reproduction of the particles is governed by a supercritical Galton–Watson process.
This in particular means that the mean of the reproduction law is greater than one. We also assume that the displacements of the particles — after the particles are born, they are moved somewhere on the real line — are i.i.d. copies of one given random variable with a so-called stretched exponential law. This means that the tail of X decays like e^{−t^r}, where the tilde means that the quotient of the two sides tends to one as t tends to infinity. Instead of giving you the full scope of what happens with heavy-tailed displacements, I am going to focus on the stretched exponential case and give you the whole treatment; most of the phenomena replicate for other heavy tails. So: the dead particles are removed from the system, and their children reproduce as before, each producing a random number of particles which are then placed on the real line. This system evolves according to this rule, and after some time you get a collection of particles on the real line. The aim is to understand how this collection evolves. The first question to ask concerns the behavior of the extremes — in our case, the position of the rightmost particle. This case study of stretched exponential displacements basically boils down to explaining the math behind a result I obtained with Nina Gantert and Thomas Höfelsauer. Nina needs no introduction, and Thomas is her former PhD student; if my memory serves me right, he graduated two years ago and now works in the private sector. So, under some technical assumptions that I omit because I don't want to make my slides too complicated, we have the following.
Assume that the law of this generic random variable is stretched exponential, centered, and has variance one. Then it turns out you have two different regimes for the behavior of the rightmost particle: depending on whether r is smaller or greater than the critical value 2/3, the rightmost position either fluctuates or has almost sure behavior. In both cases the leading term is the same, α times n^{1/r}. For smaller values of r, the rightmost particle fluctuates in the neighborhood of this value, with the limit being a random shift of a Gumbel law; here the random shift W is the martingale limit associated to the underlying Galton–Watson process, and we will see exactly how it pops up. For large r the behavior is more balanced, which translates to an almost sure limit. Let me now introduce the setting that Nina gave, in order to be self-contained — but first let me go back and say two things. My aim here is to explain where these exponents and the two types of behavior come from, and, most importantly, to give you some idea of how you can use the heaviness of the tails to your advantage when studying branching random walks. So, let Z_n be the underlying Galton–Watson process, and assume for simplicity that this process never dies out — that is, the probability of having no children is zero. Without this assumption, everything I am going to tell you still works; you just have to condition on survival of the population. All right. Then you can draw your Galton–Watson tree.
Then you assign labels to the individuals: the first particle gets the empty word as its label, particles of generation one get labels which are strings of length one, particles of generation two strings of length two, and so on. Once you have your Galton–Watson tree, you take the displacements — the family of i.i.d. random variables — and assign them to the edges of the tree; think of the label on an edge as the displacement of the given particle from its place of birth. If you are interested in the position of a particle in generation n, you look at its label on the tree, follow the path from this particle up to the root, and sum all the displacements along this path. This gives the quantity V(x), the position of the particle with label x, and the rightmost particle is just the maximum of these. One technical ingredient we will need is large deviations for random walks. You saw large deviations in Nina's course in the case of light-tailed displacements with exponential moments — that is basically common knowledge. In the case I am looking at here, the stretched exponential case, it is not so well known, so let's look at what happens. Morally, the lack of exponential moments translates into the so-called principle of one big jump: if you look at a reasonable statistic of your ensemble, then the probability that this statistic is big is asymptotically equivalent to the probability that one of the variables in your ensemble is big.
For example, the probability that the sum of two such variables is big is asymptotically equivalent to the probability that one of them is big. We want to push this phenomenon and extract a large deviation result from it. So let S_n be the sum of the X_k, and let X_n^* be their maximum. We want to understand the probability that S_n exceeds some threshold t_n; as will turn out later, it suffices for our needs to take t_n of the form c times n^{1/r}. We want to exploit the principle of one big jump. Naively, the biggest displacement does something, and the other displacements somehow contribute; so you can write out this decomposition over the maximum and see what happens. We know the behavior of the maximum — that is exactly this first part — and then we check the behavior of the remaining variables. Since you think of those as typical displacements, you expect Gaussian behavior from them. You get some expression in the exponent and simply optimize it — that gives exactly this. Now, looking at the probability involving the typical particles: since you expect S_{n−1} to be close to Gaussian, you can write S_{n−1} divided by √n, which is supposed to exceed the constant we obtained times n^{3/2 − 1/r}. You can actually see from this that the value r = 2/3 is critical for the behavior of the deviations — and this is exactly the critical parameter in our result.
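In display form, the one-big-jump heuristic for this tail can be written out (my notation; the tail is \(\overline{F}(t)\sim e^{-t^r}\) as in the talk, so the first identity is immediate from \((c\,n^{1/r})^r = c^r n\)):

```latex
\mathbb{P}\big(X > c\,n^{1/r}\big) \;\sim\; e^{-c^{r} n},
\qquad
\mathbb{P}\big(S_n \ge c\,n^{1/r}\big) \;=\; e^{-(c^{r}+o(1))\,n},
```

with the leading term in the second asymptotic coming from the single largest summand, exactly as argued above.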
So you write down the behavior of the product: the leading term coming from the maximum translates into the leading term in the exponent, and the smaller terms contribute something of lower order. The key fact here is that the leading asymptotics of the deviations of S_n really come from the maximum; depending on the value of your parameter, the rest may or may not contribute. So what can we do with that? We can rederive the first-order term of the asymptotics of the rightmost particle — again something Nina did in the previous course, but it will help me make my point. Consider the probability that M_n is greater than c n^{1/r} (and n^{1/r} is the correct scaling because, well, I say so). This is the probability that one of the particles exceeds this threshold, and by the first moment method it is bounded by the expected number of particles above the threshold. What you get is the expectation of Z_n, which is roughly m^n, with m the offspring mean, times your deviation probability — something we just computed. So what you get is, asymptotically, m^n times e^{−c^r n}. Now we tailor c so that this factor compensates m^n, which leads to the prediction that the first-order term is α n^{1/r}, with α exactly as given here. What can we learn from that? This first-order term really comes from the deviation probability, and we just said that the leading asymptotics of the deviations come from the biggest displacement. So we can argue that the first term in the asymptotic expansion of the rightmost particle is related to the biggest displacement in your collection.
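The tailoring step can be written out explicitly. Here m denotes the mean of the reproduction law, so that \(\mathbb{E}Z_n = m^n\) (my notation — the slides only refer to "the constant given here"):

```latex
m^{n}\, e^{-\alpha^{r} n} \;\asymp\; 1
\quad\Longleftrightarrow\quad
\alpha^{r} = \log m
\quad\Longleftrightarrow\quad
\alpha = (\log m)^{1/r},
```

which yields the first-order prediction \(M_n \approx \alpha\, n^{1/r}\).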
This biggest displacement is just a maximum of i.i.d. variables; the only twist is that the number of variables you are considering is random. But since the two quantities are independent, this is still easy to handle: after proper centering and scaling you will see a nice limit, which is again a random shift of the Gumbel distribution. And this is easy enough that we can actually try to prove it. So take some threshold x_n that goes to infinity, and let's compute the probability that the biggest displacement is below this threshold. You condition on your tree: you are asking that a random number of variables all be smaller than x_n, where the random number of variables is the number of edges up to time n — easily expressed in terms of your branching process. The expression under the expectation then behaves like e to the minus the product of Y_n and the tail of X. Now the random shift comes into play, because Y_n behaves very nicely: after scaling by m^n it is a martingale, and since it is non-negative it has an almost sure limit. Moreover, since I am implicitly assuming the Kesten–Stigum condition, this limit is non-degenerate. (The Kesten–Stigum condition is one of those ugly assumptions I omitted on the second slide.) So we know the behavior of Y_n: it grows like m^n, with a random shift. Now we can take a very concrete threshold and start to compute things.
First the simple bits: we take the asymptotic equivalent of Y_n — this is here — and plug in the definition of x_n; nothing happens yet. Then we look at the exponent on the right-hand side and use a Taylor expansion. The exponent is tailored so that it cancels with the derivative that pops out of the expansion, and you are left with a constant. Now just recall the role of α: α is chosen so that this factor cancels with m^n. So we are in business: naming our constants accordingly, we get a nice limit. Going back to what we have done, this shows that the biggest displacement, after proper centering and scaling, converges to this nice random shift of the Gumbel law. As you can see, the analysis of the biggest displacement is pretty straightforward. So what can we do with that? Well, the first-order asymptotics of the biggest displacement and of the rightmost particle are the same. So it is not difficult to convince yourself that the particles which did not have a big jump along their branch are somehow negligible; given that, you only need to concentrate on the particles that had a big jump. How many are there? The number of particles seen up to time n is roughly m^n, and the particles that encountered a big jump are fewer. It turns out that for small r you see only a polynomial number of big jumps, while for big r you see something which is almost, but not exactly, exponential. I know this line is somewhat vague, because I haven't told you what I mean by x having a big jump.
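In symbols, the computation for the maximal displacement sketched above reads as follows (my notation: \(\mathcal{T}\) is the Galton–Watson tree, \(\overline{F}\) the displacement tail, \(Y_n\) the number of edges up to generation n):

```latex
\mathbb{P}\big(X_n^{*} \le x_n \,\big|\, \mathcal{T}\big)
\;=\; \big(1-\overline{F}(x_n)\big)^{Y_n}
\;\approx\; \exp\!\big(-Y_n\,\overline{F}(x_n)\big),
\qquad
\frac{Y_n}{m^{n}} \xrightarrow[n\to\infty]{\text{a.s.}} W,
```

so that, for the right centering sequence \(x_n\), taking expectations produces a limit of the form \(\mathbb{E}\,e^{-cWe^{-t}}\) — a random shift of the Gumbel law.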
The takeaway message from this line is: for small values of r, a small number of big jumps contribute; for big values of r, many big jumps can contribute. In the end, the particles that made a big jump form a small subset of the whole tree. So what can we do with that? Looking at the whole tree, the first step is to find the edges whose displacement is comparable to the first-order asymptotics, and color those edges green; the edges below are colored red. Then look at the path from a green edge to the root: it will have an orange segment and a blue segment. The way the orange and blue colors work is that when you take the other big jumps into account, each path has an orange segment disjoint from the other paths, and the moment the paths start to meet each other they are painted blue. The point is that the orange pieces are independent. — [Here there seemed to be a question, but nothing appeared in the chat; perhaps it was sent privately, so let's move on.] — Where was I? Yes: the orange pieces are basically independent, and the blue pieces are the bits where you see some dependence. So the set of admissible particles — those we see at the very end — have positions that decompose as follows: going along the path, you see a blue piece, then an orange piece, then one green big jump, and then a red piece. So, what is the contribution of each color?
If you believe that the big jumps form a sparse subset of the Galton–Watson tree, then two green edges will be far away from each other on the tree, which means the corresponding paths meet very high up; hence the blue bit is relatively short. In conclusion, the blue part is negligible. Now let's focus on the red part. This requires some computation, but since the maximum displacement grows faster than linearly — like n^{1/r} — when you ask in which generation these big displacements happen, it is most likely a generation close to n. This in turn tells you that the red segments are also relatively short, so the red part can likewise be neglected. But this means that the positions of the admissible particles are asymptotically just the orange parts plus the green jumps — and those are independent, so we are quite happy. Now we need to compare the two contributions. Asymptotically, your rightmost particle consists of a big jump and the position of typical particles. We know the big jump is big, and the typical particles should behave asymptotically normally, so you can play the same optimization game as before and conclude that the contribution of the typical particles is of order n^{2 − 1/r} — the exact same exponent we got from the earlier optimization. On the other hand, we know the behavior of the biggest displacement: the leading term and the fluctuations.
So now we just compare the two. If r is small, whatever the typical particles contribute is killed by the fluctuations of the biggest displacement; the rightmost particle scales like the biggest displacement and you get this nice limit. Moreover — one comment I would like to make at this point — we can actually show that the difference between the rightmost particle and the biggest displacement is negligible compared to the order of the fluctuations. Since the biggest displacement is a maximum of single variables while M_n involves sums along paths, this line really screams that one particle did all the job. What happens when r is big? Go back to our picture: first look at the green edges; these fix the first order. Then the contribution of the orange parts should wash out the fluctuations of the green jumps. So you look at the orange paths and see which one is the biggest. As I mentioned, in this case there are actually a lot of green jumps, hence a lot of orange paths — a very big number of them — which gives the more balanced behavior. Now you can play the same optimization game again and conclude that the contribution of the orange pieces is indeed of order n^{2 − 1/r}; from the optimization you can squeeze out the limiting constant. All right, so what happens in the boundary case? You notice that both of those exponents are equal to one half.
So if the contribution of the typical particles is of this same order, you conclude that they actually fluctuate. Still, you can work around that: take a family of independent Gaussians. Looking at the rightmost particle after centering and scaling, you again see a maximum over the admissible particles — the proper centering of the rightmost position, with the typical displacements asymptotically Gaussian. But remember that we decoupled the whole thing, so we can still basically work as if we had a family of independent variables. Centering the rightmost particle — pardon, the cumulative distribution function of the rightmost particle — is then the same as for the big displacements with random Gaussian shifts. This is still manageable: you do exactly the same computation we did for the maximum. The Gaussian shift transfers into this integral, the big displacement transfers into this e^{−e^{−t}} factor, and the Gaussian variable transfers into this Gaussian term. But if you look at this integral as a function of t, it is still a constant times e^{−t}, so nothing really changes: the fluctuations do contribute, but their contribution can be hidden in a constant.
So this is the picture: when r is very small, the behavior of the whole system is really governed by one big jump; in the boundary case, this one big jump is supplemented by a Gaussian shift, whose contribution can be hidden in the constant γ — in particular, γ is not a continuous function of the parameters; and when r is big, a lot of typical particles contribute, which gives the more balanced behavior. Remember that we started our analysis by looking at the biggest displacement: on one hand it gave us all the information we needed, and on the other it allowed us to decouple the whole thing — the key step in our analysis. This was based on a joint paper with Nina and Thomas, available on the arXiv. Let me also mention that the first-order asymptotics were established by Nina in the 2000s, and large deviations for stretched exponential random variables go back to Nagaev in the late sixties — though that is not an easy read. All right, I am out of slides, so I guess this is a good place to stop. Thank you. — Thank you very much, Piotr. Maybe we can unmute people — are there questions for Piotr while the screen is still shared? Some questions were already answered in the chat. It seems there are no questions right now, so let's stop the recording and start it again for Sam's talk.
We will consider a branching process with a spatial component on the real line. After birth, each individual performs an independent step according to a stretched exponential (or Weibull) law. We will give a detailed description of the asymptotic behaviour of the position of the rightmost particle.
10.5446/55643 (DOI)
Okay, so I'm going to be talking about the extremal particles of branching Brownian motion. So presumably we've all seen branching Brownian motion before, but just to fix the exact model we're working with, I'm going to be talking about binary branching Brownian motion where particles have unit exponential lifetimes. So particles move around like Brownian motions, and after an exponential amount of time they die and are replaced by two particles that inherit the spatial position of their parent. So as time progresses we have more and more particles in the system, and we're going to be most interested in the extremal particles — that's an informal name for the particles that are the highest up in the process at time t, the particles that are furthest away from the origin in the positive direction. So this talk is just going to be an overview; I'm not going to prove anything. It'll be centered on three questions we'll ask about these extremal particles. So the first question, and the most natural one to ask, is: where are they? So at a large time t, roughly where are the particles that are furthest away from the origin in the positive direction? Once we've understood that question, we'll move on to another interesting question, which is: if the particles that are furthest away from the origin lie at a certain height, how did they get there? And I have to say, before I knew the answer to this question I had absolutely no clue what the answer would look like. Until quite recently it was difficult to guess what this past blue trajectory might look like — and can people see my cursor, by the way? Yes. Yes. And it's difficult to guess what the trajectory of the particle that ends up in the highest position looks like: does it hover around the origin and then suddenly burst up at the very end?
Or does it look more or less like a straight line? So that's the second question: how did they get there? And the third question we'll ask is: are they closely related? So if we pick some of these particles that are among the highest particles at a large time t, is it the case that they are both up there because they had an ancestor that lived very recently, this ancestor got to a really high position, and they're both in the midst of this common ancestor that last existed near time t? Or is the opposite true — that actually it's a coincidence that they're both so high up and they're totally unrelated? There are just loads of particles in the system, lots of them end up at this high height, and it's a coincidence. So those are the three questions we'll ask. So I should first have a disclaimer that I'm going to import lots of results from branching random walks and just pretend that the papers were talking about branching Brownian motion, and I won't always be so precise in the statements of the results. And whenever I say t, t is a huge time. Okay, so the first question: we want to work out where the highest particles are at a large time t. And in order to do this, or just to get a rough grip on the question, we need to first understand how many particles there are at time t. So if N_t is the number of particles alive, then N_t grows exponentially, at least in terms of expectation. And in fact, if you look at the ratio of N_t with its expectation, it behaves really well. So in this first graph here, I've plotted N_t against time, and in the second graph I've divided N_t by this exponential growth. And you'll see that as time goes to infinity, this ratio stabilizes. The height where it stabilizes is random, but it stabilizes. And actually it turns out that the height at which it stabilizes is exponentially distributed with mean one. So the take-home of this slide is that at a large time t, there really are roughly e to the t particles alive.
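As a quick sanity check (my own simulation sketch, not from the talk): when k particles are alive, the next branching event arrives at rate k, so the population size N_t is a Yule process, and N_t e^{-t} should stabilize around an Exp(1) limit with mean one.

```python
import math
import random

def yule_population(t, rng):
    """Particles alive at time t: binary branching, Exp(1) lifetimes.

    With k particles alive, the next split happens after an Exp(k) wait.
    """
    n, clock = 1, 0.0
    while True:
        clock += rng.expovariate(n)
        if clock > t:
            return n
        n += 1

rng = random.Random(0)
t = 5.0
samples = [yule_population(t, rng) * math.exp(-t) for _ in range(2000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to 1, the mean of the Exp(1) limit
```

Averaging N_t e^{-t} over many runs lands close to one, matching the fact that E[N_t] = e^t exactly.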
So we know that there are roughly e to the t particles alive. So we'll be interested in calculating the expected number of particles that lie above a certain height at time t. And thanks to the many-to-one formula, the expected number of particles above a height y at time t is exactly equal to the total expected number of particles times the probability that a typical particle ended up above y. So we have this nice formula here. And of course, since the expectation of the number of particles alive is e to the t, we have this term, and this integral corresponds to the probability that a Brownian motion ends up above height y at time t. So this quantity here is decreasing in y. We want to choose y big enough so that this integral gets small and competes with this huge e to the t term; that is, we want to choose y so that this quantity is of order roughly one. And it's a calculation you can do that the right choice of y, so that this expected number of particles above y is of order one, is about root 2 times t minus a funny logarithmic correction. If we chose y any smaller than this level, on average there would be tons of particles above y. So here's a diagram: if F_t is this special expectation frontier, root 2 t minus the logarithmic correction, then the expected number of particles in a patch above this frontier is tiny — and by Markov's inequality that guarantees that with high probability there are no particles in this patch. However, the expected number of particles in a patch below the frontier is huge. So it's quite natural to guess, at least at first glance, that if this is the point where the expectation cuts off, then the highest particles lie around this root 2 t minus the logarithmic correction. However, this guess is wrong — not hugely wrong, but slightly wrong. So we have this result by Bramson from the 70s.
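To make the many-to-one calculation concrete (a numeric sketch of my own, not from the talk), one can evaluate e^t · P(N(0, t) > y) at three levels: √2·t itself, the expectation frontier √2·t − (1/(2√2))·log t, and Bramson's frontier √2·t − (3/(2√2))·log t.

```python
import math

def expected_count_above(y, t):
    """Many-to-one: E[# particles above y at time t] = e^t * P(N(0, t) > y)."""
    return math.exp(t) * 0.5 * math.erfc(y / math.sqrt(2.0 * t))

t = 100.0
s2 = math.sqrt(2.0)
at_sqrt2t       = expected_count_above(s2 * t, t)
at_exp_frontier = expected_count_above(s2 * t - math.log(t) / (2 * s2), t)
at_bramson      = expected_count_above(s2 * t - 3 * math.log(t) / (2 * s2), t)
print(at_sqrt2t, at_exp_frontier, at_bramson)
# small, order one, large -- yet the maximum usually sits near Bramson's level
```

The three counts come out tiny, of order one, and large respectively, which is exactly the "expectation lies to you" tension discussed next.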
But if you take a huge time and you look at where the highest particle is at that time, it actually sits at a slightly lower height than we thought it would. So reality doesn't live up to expectations here. Now we have this higher frontier where the expectation cuts off, but in reality the highest particles sit a bit lower. So this presents us with something that looks a tiny bit like a paradox, in that we have these two different frontiers, and we know that in reality the highest particle usually sits near this lower frontier, root 2 t minus this 3 over 2 root 2 logarithmic correction. But based on our previous calculation using the many-to-one formula, there are on average loads of particles in this patch below the expectation frontier. Now, this seems a bit like a paradox, but of course, when you think about it, it's not: if an eccentric billionaire says, I'm going to give you a one in 500 chance to win a billion dollars, then both of the following statements are true — you're on average going to win loads of money, and you're not going to win any money very often. So that's what's going on here. On average there are tons of particles here, but usually there are no particles here. It's a bit strange. And we'll get a much better understanding of how this happens in the next result, by Hu and Shi. So whereas Bramson's result looked at a large fixed time and said that the maximum usually sits near a certain height, Hu and Shi's result tracks the height of the maximum particle as time goes to infinity. And they find that asymptotically this ratio here fluctuates between two levels. So interestingly, the maximum particle eventually never goes below the frontier that was given in Bramson's result, and it never goes above the frontier that was given by the expectation calculation. Now this result does account for this weird thing we had before with the ghost particles, where there were loads of particles on average in a patch, but usually no particles in the patch.
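One can see the lag behind the speed line numerically with the branching-random-walk stand-in the speaker mentioned (a simulation sketch of my own; binary branching with standard Gaussian steps, so the speed constant is √(2 log 2) ≈ 1.18 rather than √2): the empirical maximum after n generations lands visibly below speed · n, consistent with the logarithmic correction.

```python
import math
import random

def brw_max(n_gen, rng):
    """Maximum position of a binary branching random walk with N(0,1) steps."""
    positions = [0.0]
    for _ in range(n_gen):
        # each particle branches in two; each child takes its own Gaussian step
        positions = [x + rng.gauss(0.0, 1.0) for x in positions for _ in (0, 1)]
    return max(positions)

rng = random.Random(0)
n = 13
speed = math.sqrt(2.0 * math.log(2.0))        # analogue of sqrt(2) for this walk
avg_max = sum(brw_max(n, rng) for _ in range(20)) / 20.0
print(round(avg_max, 2), round(speed * n, 2))  # average max sits below speed * n
```

Even at this modest depth (2^13 particles), the gap of a few units below speed · n is visible, in line with the (3/(2·speed))·log n correction.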
Because if we look at this simulation of a branching Brownian motion, and we notice that the maximum particle got up to this higher frontier — it was one of those rare occasions where the maximum particle got up to the higher frontier — only after that period does it have loads of children or descendants in this patch, and for a brief period of time this patch between the two frontiers is filled with particles. So we have these sort of hidden events that happen very occasionally, where very occasionally you have tons of particles in this gap between the two frontiers. So there's a paper by Matt Roberts which you can look at, which has quick proofs of both Bramson's result and Hu and Shi's result. So we've answered the first question: the highest particles at time t usually sit at a height around root 2 t minus this logarithmic correction; occasionally they're a bit higher. So we now look at question two, which is: how do these high particles get there? So we know that the highest particle at a large time t usually sits around this height, root 2 t minus 3 over 2 root 2 log t. Now if we trace the path it took to get there, we get this sort of random trajectory. Now since every particle in the system itself behaves like a Brownian motion, one natural guess for how it got there — a first guess that presumably all of us would think of — is: maybe this path just looks like a Brownian motion conditioned to end up at height root 2 t minus 3 over 2 root 2 log t. So here's the natural first guess. Now I've been very careful to highlight that this guess is wrong, and let's have a think about why it's wrong. If this were the case, then this process — a Brownian motion conditioned to end up at this height — would occasionally go above the flat line root 2 t, and the order of these fluctuations would be of size square root of t.
But we've already seen in Hu and Shi's result that no one ever goes above this root 2 t minus 1 over 2 root 2 log t frontier, for large t at least. So this natural first guess can't be exactly right, because if it were true the particle would occasionally go too high, and we've already stipulated that no particle ever goes that high. So a better guess is that the trajectory of the highest particle looks a bit like a Brownian motion conditioned to end up at that height, but in some sense it respects the frontier — it respects the Hu and Shi result that no particle ever gets too high. Of course, I say it's very easy to come up with a better guess with hindsight, when you know what the result is; as I said, I don't think it's easy to guess what the trajectory of this particle is. So indeed this better guess we have here is correct, and we have this wonderful result by Chen, which was proven in the setting of branching random walks, and I've given a bit of an informal statement here. But Chen's result says that if you look at the trajectory of the highest particle, it looks like a Brownian motion conditioned to end up at that really high height, but also conditioned to never go above this straight line. The formal, rigorous statement involves a rescaling: this blue line, rescaled, looks like a Brownian excursion. So we've answered question two. We now understand where the highest particles are, which was question one, and how they got there, which was question two. So now we move on to looking at the relationships between different high particles. So again, if we pick the two highest particles in the system at a large time t: were they likely related very recently, with a common ancestor who had a very high position near time t? Or is it a coincidence that they're both there, and actually the last common ancestor they share was right at the beginning of the time period?
Or could it be that something in between is true — they were last related somewhere in the middle of the time period zero to t? And before tackling this question, it's worth mentioning that if we pick not the two highest particles but just two typical particles, then it's not obvious when two typical particles were last related. So first let's look at that. We have this result by Harris, Johnston and Roberts: at time t you have e to the t particles, a huge number of particles, and if you pick two particles uniformly from the entire population — you don't care about space — and you let the coalescence time denote the time of their most recent common ancestor, then it turns out that as t goes to infinity, this coalescence time, surprisingly, stays very close to zero. So the vast majority of pairs of particles are distantly related — the vast majority of typical pairs, at least. And actually we have an explicit formula for the limit of this random variable: it converges in distribution to a random variable with this strange distribution here. And if you stare at it for long enough, you can check that everything's okay when you plug in s equals zero. I often look at it and get a bit scared. So we now understand how closely related typical particles are, and we return to our original question, which is: how closely related are extremal particles? So one way of setting up this framework is to look at a Gibbs measure. So you choose two different particles randomly from the population, but your sampling is biased by e to the beta times the height of the particle. So when beta is positive, clearly you prefer picking particles that are higher up, and as beta gets big, it becomes more and more important to you that you're picking particles that are high up in the system.
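The "typical pairs coalesce near time zero" phenomenon can be illustrated on a toy stand-in (my own sketch, not from the talk): a deterministic complete binary tree of depth n, where every particle splits each generation. Labelling leaves by n-bit strings, the generation of the most recent common ancestor of two uniform leaves is the length of their common bit prefix, which stays O(1) however deep the tree is.

```python
import random

rng = random.Random(0)
n = 20                 # tree depth; 2**20 leaves
depths = []
for _ in range(5000):
    i = rng.randrange(2 ** n)
    j = rng.randrange(2 ** n)
    while j == i:
        j = rng.randrange(2 ** n)
    # common prefix length = generation of the most recent common ancestor
    depths.append(n - (i ^ j).bit_length())
avg = sum(depths) / len(depths)
print(round(avg, 2))   # stays around 1 even though the tree has depth 20
```

The average coalescence generation hovers around one, independent of n — a crude but suggestive analogue of the Harris–Johnston–Roberts statement that the limiting coalescence time of a uniform pair is an order-one random variable.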
So we have this very surprising result, I think — it was formulated in the 80s by Derrida and Spohn and proved rigorously by Arguin, Bovier and Kistler — that if beta is sufficiently big, then when you pick two particles according to this e to the beta sampling, these particles were likely either related right at the beginning or right at the end, with high probability. And strangely, asymptotically, both of these events happen with positive probability. So just to emphasise: it's really unlikely that these two particles were last related at some point in the middle of the interval — it was either O(1) from the start or O(1) from the end. And actually there are loads of really interesting papers with this sort of Gibbs sampling. So if you let p_beta be the probability that they were related early, and 1 minus p_beta the probability that they were related late, then there's a paper by Chen, Madaule and Mallein which calculates this probability p_beta explicitly; it's given in terms of a Poisson–Dirichlet distribution. And — sorry, this should say it's increasing in beta — as beta decreases, you get less and less likely to pick particles that are high up in the system. And strangely, when beta is less than 1, p_beta is equal to 1. So when beta is less than 1, you do still tend to prefer particles that are higher up in the system, but the force of how much you want to pick higher particles isn't strong enough to really see any of this late coalescence. And actually you find that when beta is less than 1, all pairs of particles sampled according to this Gibbs sample with parameter beta are very likely to have last been related near the start. There are numerous interesting papers in this direction by Arguin, Bovier, Kistler, Madaule, and Pain.
Pain has a result which is sort of analogous to the Chen result we saw earlier: if beta is less than 1 and you pick a particle that's reasonably high up with this e to the beta sampling, then a normalization of its trajectory looks like — sorry, not a Brownian excursion — a Brownian meander, which is the analogue of Chen's Brownian excursion. So that completes the answer to our third question. So just as a recap: we've asked three questions about these highest particles. We found first of all that the highest particles usually sit near this height with the strange logarithmic correction, and there's this whole story about how the expectation sort of lies to you when you're doing these calculations about where the highest particles are — with an explanation of this mysterious phenomenon thanks to the Hu and Shi result, which states that the highest particles do, very rarely, get a bit higher. Then second, we looked at the past trajectories of these highest particles and found that they look like a Brownian motion conditioned to end up at this really high height, but with a repulsion — people in the area often call it entropic repulsion — this mysterious force which keeps them away from the frontier. And finally, when we looked at how closely related these extremal particles are, we found that it's very likely that they were either related in the very beginning of the process or near the very end of the process. So that concludes the talk. Thank you for listening. Thank you very much. Can you unmute for me? I should have introduced Sam. So Sam is a postdoc — this was on the slides right now — you're in Graz, and are you really back in Graz right now? I'm in London right now. Are you still in London? Yeah. But Graz is where you're officially affiliated. Sorry Nina. It's a trap!
We study the positions of extremal particles in branching Brownian motion, with a particular emphasis on understanding why moment calculations can be misleading. We then turn to looking at the past trajectories of these particles, as well as their genealogical relationship with other particles in the system.
10.5446/55658 (DOI)
Okay, great. So thanks a lot. So this is my final lecture, and today I'm going to talk about the third point of the overview, which was refined mixing–hitting relations. So first I'm going to set up some notation and recall the definition of the relaxation time, and once I do that, I'll follow the paper by Basu, Hermon and Peres. So this lecture will be based on this paper, which is called Characterization of cutoff for reversible Markov chains. So probably Jonathan would be the most appropriate person to give this talk, but since it's in my mini course, I'll try to explain, and he will be there to answer questions in the chat. Okay, so let me first set up some notation. On the website I posted a one-page note where I recalled the spectral representation theorem for reversible matrices. So let me first define the inner product that one uses. If we have a finite state space S and two functions f and g from S into R, then their inner product with respect to the invariant distribution pi is defined to be the sum over all x in S of f(x) g(x) pi(x). And now the spectral representation theorem tells us that if P is reversible with respect to pi, the invariant distribution, then there exist eigenvalues lambda_j, for j from 1 up to the size of S, with corresponding eigenvectors f_j, j from 1 up to the size of S; lambda_1 can be taken to be 1 and f_1 can be taken to be the all-ones function. And then one can express P^t(x,y) divided by pi(y) as 1 — this corresponds to lambda_1 equal to 1 — plus the sum over j from 2 up to the size of S of lambda_j^t times f_j(x) times f_j(y). Okay, so I'm not going to prove this theorem — the proof is in the notes — but the idea is to use the matrix A(x,y), defined to be the square root of pi(x) divided by the square root of pi(y), times P(x,y), and check that this is a symmetric matrix.
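As a tiny illustration (a sketch of my own, not from the posted notes), one can verify this representation exactly on a two-state chain, where all the eigendata can be written down by hand: with P(1→2) = a and P(2→1) = b, the second eigenvalue is λ₂ = 1 − a − b, and a π-normalised eigenvector f₂ can be read off from π.

```python
import math

a, b = 0.3, 0.1
P = [[1 - a, a], [b, 1 - b]]
pi = [b / (a + b), a / (a + b)]          # reversible invariant distribution
lam2 = 1.0 - a - b                       # second eigenvalue
# eigenvector with <f2, f2>_pi = 1 and <f2, 1>_pi = 0
f2 = [math.sqrt(pi[1] / pi[0]), -math.sqrt(pi[0] / pi[1])]

def step(M, N):
    """2x2 matrix product."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Pt = [[1.0, 0.0], [0.0, 1.0]]            # P^0
ok = True
for t in range(0, 11):
    for x in range(2):
        for y in range(2):
            spectral = 1.0 + (lam2 ** t) * f2[x] * f2[y]
            ok = ok and abs(Pt[x][y] / pi[y] - spectral) < 1e-12
    Pt = step(Pt, P)
print(ok)  # True: P^t(x,y)/pi(y) = 1 + lam2^t f2(x) f2(y)
```

The identity holds to machine precision for every t, which is exactly the |S| = 2 case of the spectral representation above.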
And so one can go and analyze the matrix A, and then pass back to the matrix P, obtaining the eigenvectors of P by multiplying the eigenvectors of A by the correct matrix. Okay, so if we have a reversible Markov chain — so if P is reversible — we write the eigenvalues in decreasing order: 1 equals lambda_1, which is strictly greater than lambda_2, because the invariant distribution is unique, down to lambda_n, which is greater than or equal to minus 1. And now I define lambda_star to be the maximum of the absolute value of lambda, over all eigenvalues lambda of P with lambda not equal to 1. So this is what lambda_star is: we take the maximum absolute value of all eigenvalues which are not 1. And we also define gamma_star to be 1 minus lambda_star, and this is called the absolute spectral gap — absolute, as opposed to gamma equal to 1 minus lambda_2, which is called the spectral gap. Now one exercise that I set is to check that if a chain is lazy, then actually all eigenvalues are non-negative, and so gamma_star is equal to gamma. Okay, so for lazy chains, gamma_star equals gamma. And the final definition is that the relaxation time t_rel is defined to be 1 over gamma_star. This is the relaxation time. So today I'll be talking about reversible — and also lazy — Markov chains, so t_rel is 1 over gamma_star, or 1 over gamma, whichever you like; the two are equal. And a final piece of notation: if f is a function from S to R, we write E_pi(f) for the expectation of f under pi, so just the sum over all x of f(x) pi(x), and the variance with respect to pi of f is defined to be the expectation of (f minus its expectation) squared.
Okay, and then I set one exercise, which is to show the Poincaré inequality: the variance with respect to pi of P^t f is upper bounded by e to the minus 2t over the relaxation time, times the variance under pi of f, where P is reversible and lazy on a finite state space; and this is true for all functions f and all t greater than or equal to 0. So what this inequality says is that if you wait a time t which is a multiple of the relaxation time, this exponential is just a constant, and so after a constant number of t_rel steps the variance of P^t f drops to any accuracy you like. And when I write P f, I think of P as an operator acting on functions: P f(x) is the sum over all y of P(x,y) f(y). Okay, great. So this is all the notation that I want to set up. All I want you to remember is the definition of the relaxation time, which is one over the spectral gap, and we are also going to need the Poincaré inequality — I set it as an exercise, and all it requires is just the spectral representation of P^t. Okay, so now I'll come back to the paper of Jonathan with Riddhipratim Basu and Yuval Peres. Before I state the theorem, let me set up some notation. In the first lecture, I showed that the mixing time of a reversible lazy Markov chain is, up to constants, the maximum hitting time of large sets. But what we showed, as I said already, is that this equivalence is up to constants, and in our theorem we actually didn't have good control on the constants. Now, in Jonathan's result, they obtain a refined relationship between these two quantities, and the constants are actually equal to one.
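The Poincaré exercise can be checked numerically (a sketch of mine, not from the lecture notes) on the lazy random walk on an n-cycle, whose eigenvalues are known in closed form: λ_k = (1 + cos(2πk/n))/2, so the gap is γ = (1 − cos(2π/n))/2.

```python
import math

n = 8
# lazy random walk on the n-cycle: stay w.p. 1/2, step left/right w.p. 1/4 each
P = [[0.0] * n for _ in range(n)]
for x in range(n):
    P[x][x] = 0.5
    P[x][(x + 1) % n] = 0.25
    P[x][(x - 1) % n] = 0.25

gap = (1.0 - math.cos(2.0 * math.pi / n)) / 2.0  # lazy => gamma* = gamma
t_rel = 1.0 / gap
pi = [1.0 / n] * n                               # uniform invariant distribution

def var_pi(f):
    m = sum(p * v for p, v in zip(pi, f))
    return sum(p * (v - m) ** 2 for p, v in zip(pi, f))

f = [1.0] + [0.0] * (n - 1)                      # indicator of state 0
v0 = var_pi(f)
holds = True
for t in range(31):
    holds = holds and var_pi(f) <= math.exp(-2.0 * t / t_rel) * v0 + 1e-12
    f = [sum(P[x][y] * f[y] for y in range(n)) for x in range(n)]  # f := P f
print(holds)
```

Since Var_pi(P^t f) ≤ (1 − γ)^{2t} Var_pi(f) ≤ e^{−2t/t_rel} Var_pi(f), the check passes for every t, with the exact decay rate set by the spectral gap.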
So before I state the theorem, let me define the quantity analogous to the t_H(alpha) that I was talking about before, which was the maximum expected hitting time of sets of measure at least alpha. So define, for epsilon and alpha in (0,1), hit_alpha(epsilon) to be the first t such that, when we take the maximum over all starting states x and over all sets A of measure at least alpha, the probability of hitting A after time t is less than or equal to epsilon. So what we're saying here is that if we have a set A which is large, the probability that we first hit A after time t is smaller than epsilon — oh, sorry, I forgot to write the subscript x on the probability here; again, we take the worst starting state x. So, the theorem of Basu, Hermon and Peres: let X be a reversible lazy Markov chain on a finite state space S, with transition matrix P and invariant distribution pi. Then for all epsilon in (0,1), t_mix(2 epsilon) is upper bounded by hit_{1−epsilon}(epsilon) plus 2 times the relaxation time times the log of 2 over epsilon cubed. And t_mix(1 − epsilon) is upper bounded by hit_{1−epsilon}(1 − 2 epsilon) plus 2 t_rel times the logarithm of 8 over epsilon cubed. (I don't think the beeping noise is coming from me. I can also hear it.) So instead of t_H, which is the maximum expected hitting time of a large set, here we're looking at the tail of the hitting time. So it's a more refined result, also in the sense that we are not looking at expectations but at tails of hitting times. But I called it analogous because we relate the mixing time to this hit quantity. So that's one theorem. Actually, that's not the most general result from the paper.
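To get a feel for the quantities in the theorem (a numeric sketch of my own; on such a tiny chain the bound is very far from tight), one can compute hit_{1−ε}(ε), t_rel, and t_mix(2ε) exactly for the lazy walk on an 8-cycle and check the inequality from star.

```python
import math
from itertools import combinations

n, eps = 8, 0.25
# lazy random walk on the n-cycle
P = [[0.0] * n for _ in range(n)]
for x in range(n):
    P[x][x] = 0.5
    P[x][(x + 1) % n] = 0.25
    P[x][(x - 1) % n] = 0.25
pi = [1.0 / n] * n
t_rel = 2.0 / (1.0 - math.cos(2.0 * math.pi / n))   # 1 / spectral gap

def survival(B, t):
    """max over x in B of P_x(tau_A > t), where A is the complement of B."""
    h = {x: 1.0 for x in B}
    for _ in range(t):
        h = {x: sum(P[x][y] * h[y] for y in B) for x in B}
    return max(h.values())

def hit(eps):
    """hit_{1-eps}(eps): sets of measure >= 1-eps have complements of <= 2 states."""
    t = 0
    while True:
        worst = max(survival(B, t)
                    for k in (1, 2) for B in combinations(range(n), k))
        if worst <= eps:
            return t
        t += 1

def tmix(delta):
    """First t with worst-case total variation distance at most delta."""
    Pt = [[float(i == j) for j in range(n)] for i in range(n)]
    t = 0
    while True:
        tv = max(0.5 * sum(abs(Pt[x][y] - pi[y]) for y in range(n))
                 for x in range(n))
        if tv <= delta:
            return t
        Pt = [[sum(Pt[i][k] * P[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]
        t += 1

h, m = hit(eps), tmix(2 * eps)
bound = h + 2 * t_rel * math.log(2.0 / eps ** 3)
print(m, h, round(bound, 1))   # t_mix(2 eps) is far below the theorem's bound here
```

On the cycle the worst large set to hit is the complement of two adjacent states, and hit_{0.75}(0.25) works out to 5; the theorem's right-hand side is dominated by the 2·t_rel·log(2/ε³) term.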
But the reason I chose this version is that they state it in the introduction; in the more general result, instead of taking 2 epsilon here, one can take epsilon plus delta and get more precise bounds with a different constant. But the idea is exactly the same, so it will be easier for presentation purposes to just stick with this. So — and I'm hopefully going to present the whole proof of this result today — let me call these star and double star. I will leave double star as an exercise, but I will prove star, and this is the hard direction of their paper, because they also obtain a lower bound of similar order. I'm not going to write the lower bound — one can find it in their paper and I will not have time to prove it. But the lower bound, as always in this kind of result, is easier, because it usually uses the idea that if you haven't hit a big set, then you can't have mixed. So the lower bound again involves this hit quantity minus some multiple of the relaxation time. So before I jump into the proof, I would like to make one remark. What we see from this statement is that the mixing time is upper bounded by these two quantities, so actually the mixing happens in two stages: the first is governed by this term and the second one is governed by the relaxation time. (Laziness is not used in the proof. That's right — laziness is not essential, because one can just put the appropriately defined relaxation time here. But okay, let's just stick with lazy chains for now; I only stated the Poincaré inequality for lazy chains because then all of the eigenvalues are non-negative. So yeah, laziness is not essential.) So mixing happens in two stages: in the first one, governed by the hit quantity, we have to wait to escape some small set with high probability.
So when I say high probability, I mean probability one minus epsilon. And in the second stage, we wait for relaxation-time many steps, and for the second stage we are going to use the Poincaré inequality. Okay, so I'll just start the proof now. So as I said, I'm just going to prove star; double star is left as an exercise — it's exactly analogous to what I will present. Okay, so what do we want to prove? We want to show that if we wait for this amount of time, then we are mixed and the total variation distance is at most 2 epsilon. So let me set t to be hit_{1−epsilon}(epsilon), and s to be the second term, 2 t_rel log(2/epsilon cubed). We want to show that for all x and all sets A, we have that |P^{t+s}(x, A) − pi(A)| is less than or equal to 2 epsilon. So if we show this for any starting state x and any set A, then we will have proved star. (I'm just looking at the chat. Yeah, I guess we can just take epsilon to be at most a half. Yeah, of course, that's right. And so, yeah, thanks so much.) Okay, so if we show what I wrote over here, then we will be done: for any starting state x and any set A, if we have this inequality, that's exactly what we are looking for. So what is the idea? Like before, the idea is that we want to define an intermediate set that I'm going to call G. And what we want is, first, to hit G, and then, once we hit G, to have good control on the probability of being in A after t plus s steps. So the idea: we want to define an intermediate set G such that we hit it with high probability before time t — when I say high probability, again, this means probability one minus epsilon, or something like that — before time t, and conditional on hitting it.
By time t, we want to be close to pi(A) at time t plus s, up to at most epsilon. What do I mean? I mean that the probability, starting from x, that X_{t+s} is in A, given that the first hitting time of G happened before time t, minus pi(A) — I want this to be at most epsilon in absolute value. Now, if the probability that we hit G by time t is large — so the probability that we don't hit it is less than epsilon — then we will be done. Because — so let me now write it in blue, because I'm going back to the proof — let's just write this out so that we see what we need G to be. The probability, starting from x, that X_{t+s} is in A, minus pi(A): I'm going to write the first probability as P_x(X_{t+s} in A | tau_G ≤ t) times P_x(tau_G ≤ t), plus P_x(X_{t+s} in A | tau_G > t) times P_x(tau_G > t), minus pi(A). So this whole thing — let me just write it down here — is upper bounded by |P_x(X_{t+s} in A | tau_G ≤ t) − pi(A)| plus P_x(tau_G > t). So I upper bounded this difference by the sum of this absolute value plus the tail probability of the hitting time of G. So if we have a set G for which we can show that this absolute value is upper bounded by epsilon, and for which we can also control the tail of the hitting time of G by epsilon — so that this tail probability is upper bounded by epsilon — then we'll be done, because we'll have shown that this difference is at most 2 epsilon, which is what we wanted.
So let me upper bound this now by the maximum — actually, let me write supremum, to be correct — over all y in G and all r greater than or equal to s, of |P_y(X_r in A) − pi(A)|, plus the probability, starting from x, that tau_G is greater than t. So what I did so far: I said that we want to show this inequality, so we want to find a set G — we don't know what G is yet, but we want to arrive at its definition. So I said, suppose we have some set G; then we can always upper bound this absolute value by the sum I wrote down here. And now we want to find the right G so that this is less than epsilon and this is less than epsilon, and then we'll be done. So let's define G to be what we want it to be, and let's hope — by which I mean that we control the first term automatically by the definition of G, and then let's hope that the second term will also be controlled. So define G to be the set of all points y for which the sup over all r greater than or equal to s of |P_y(X_r in A) − pi(A)| is less than or equal to epsilon. Then we get that this is upper bounded by epsilon plus P_x(tau_G > t). So it remains to show that P_x(tau_G > t) is less than epsilon — let me write: it suffices to show this; once we have that, we are obviously done. Now — I'm going to take a break in one or two minutes — so far I have just defined the set G to be what we want it to be, so that we can control this first term. So we forget about this term and now focus on the second one. So now I want to show that the probability that we haven't hit G by time t is upper bounded by epsilon. So far, I haven't used anywhere what t and s are. I'm going to use s a little later, but let's now look at t. So t is hit_{1−epsilon}(epsilon).
And if we go back to the definition, it means that if A is a set of measure at least 1−ε, then the probability that we haven't hit it by time t is at most ε. So it suffices to show that this probability is less than ε. And again, now I'll write it: it suffices to prove that π(G) > 1−ε, since t equals hit(1−ε, ε). So now we need to show that π(G) > 1−ε. This is going to be the rest of the proof. So I think it's a good place to take a two-minute break now. Thank you. Feel free to ask any questions in the chat. Yeah. So let me know when to stop. Okay. I guess there is not enough time now, so maybe at the end of the lecture; would that be better? Okay. Yes, we can. There are significant questions about earlier lectures; we can discuss them afterwards. The Zulip forum is also available and remains active after the course ends, so you can also have discussions about the lectures or the exercises there. So Georgios is asking about the name "relaxation time". I think Jonathan is the right person to answer. Yeah, I'm not sure about the history of that, but it's related to the fact that when the chain is already at equilibrium, or close to equilibrium, it measures the time frame in which observables become decorrelated, the worst-case time window for this to happen. I'm not sure who came up with this terminology, but it's used in David Aldous's work and in his book with James Fill. Yes. There is also a question about whether today's result implies the upper bound from the first lecture, and about an explicit comparison between the hitting time and its refined version. Right, so I want to mention what Jonathan has mentioned here: the relaxation time is always a lower bound for the mixing time. Yeah, I should have said that. So the relaxation time is always a lower bound for the mixing time.
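As I read the lecture, the quantity hit(1−ε, ε) used for t is the smallest time by which every large set has been hit with high probability; this definition is my reconstruction from the sentence above, not stated verbatim in this part of the transcript:

```latex
\mathrm{hit}(1-\varepsilon,\varepsilon)
:=\min\Big\{\,t:\ \max_{x}\,\mathbb{P}_x(\tau_B>t)\le\varepsilon
\ \text{ for every } B \text{ with } \pi(B)\ge 1-\varepsilon\,\Big\}.
```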
So the second term here, oops, sorry, so this second term can be ignored when we only care up to constants. Tingjou is asking about one of the inequalities from the proof, the first inequality. Yeah, okay, sorry, there is one step I omitted. You can write π(A) here as π(A) times the probability that τ_G ≤ t plus the probability that τ_G > t. Then you apply the triangle inequality and you get exactly what I wrote. Here I dropped the factor P(τ_G ≤ t), because it is upper bounded by one, and here I upper bounded by one the absolute value of the difference between the two probabilities. So you just need to write π(A) times one, replace the one by the sum of the two probabilities, and apply the triangle inequality. And then he's asking about the second inequality, the one with the sup. Right, so here I condition on having hit G by time t. Then we want to be in A after t+s steps, so, counting from the hitting time, after at least s steps we want to be in A. So I just take the worst starting state in G: I applied the strong Markov property at the first hitting time of G. Perhaps we can continue. Okay, yeah. Okay, very good. So we are all on the same page now. We want to show that this is upper bounded by 2ε. We've reduced it to showing that the tail of the hitting time of G, the probability that it is greater than t, is less than ε, and now we have reduced this to showing that the stationary measure of G is at least 1−ε. So let's go: the remainder of the proof will be focused on showing that π(G) > 1−ε. Now let's go back to the definition of G. G is defined as the set of all starting points for which the absolute value of the probability of being in A after any r ≥ s steps, minus π(A), is smaller than ε. So suppose that we were to take r equal to s here.
So suppose that we didn't have the sup, we just had r equal to s. Then, if we look at the definition of s, s is the right multiple of the relaxation time. This log, so let's forget about the log, let's just think of it as a multiple which only depends on ε. So if here we forget about the supremum and we only look at r equal to s, then it would be easy to show that π(G) is large, just by using the Poincaré inequality. And I'm going to explain that in the more general setting that we see now. But I just want to say that if r were equal to s, then the Poincaré inequality alone would be enough, and we would be able to show that π(G) > 1−ε. Here we want to take the sup over all times. So for r equal to s we would have good control of this, but now we want to take a sup. Usually, when you want to control a maximum of things and you only have control of one thing, this rings a bell, and it looks like one would want to use something like Doob's inequality. In this setup there is a theorem that is extremely useful, and it solves exactly this issue of taking the sup over all r: it is called Starr's maximal inequality. I'm going to state it and prove it today. But first let me introduce some notation, so that we arrive at the statement of Starr's inequality. First of all, if we have a function f from S to R, define f*(x) to be the supremum over all k ≥ 0 of P^{2k}f(x). So again, here I'm thinking of the matrix P as an operator acting on functions, and I define f*(x) to be the sup over all k ≥ 0 of P^{2k}f(x). Now I want to define, so here we want to look at this difference.
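For reference, the inequality announced here (it is stated and proved later in the lecture; I am only collecting it in display form) reads, for P reversible with respect to π and p in (1, ∞):

```latex
\|f^*\|_p\;\le\;\frac{p}{p-1}\,\|f\|_p,
\qquad\text{where}\quad f^*(x):=\sup_{k\ge0}P^{2k}f(x).
```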
So I'm going to define the function f_t(x) to be P^t(x, A) minus π(A), which can also be written as the matrix P^t applied to the function (indicator of the set A minus π(A)), all of this evaluated at the point x. Okay, so this is my function f_t. What is f_0? f_0 is simply this function: the indicator of A, recentered with respect to π. And now f_t*(x) is going to be the sup over all k ≥ 0 of P^{2k}f_t(x), which by the definition is the sup over all k ≥ 0 of P^{2k+t}(x, A) minus π(A), because I just multiply this matrix by P^{2k}. And I'm also going to take (Pf_t)*(x): so take the function Pf_t and now take the maximal function of it, which is the sup over all k ≥ 0 of P^{2k}(Pf_t)(x), that is, the sup over all k ≥ 0 of P^{2k+t+1}(x, A) minus π(A). So it's not clear yet why I'm defining all of these quantities, but if you look at the set G, the set of all y such that this sup is at most ε, then G is exactly the set of all y for which f_s*(y) and (Pf_s)*(y) are less than or equal to ε. So I defined these functions here: for any function f we define f* by taking the sup over all k ≥ 0 of P^{2k}f, and I define f_t to be the function that I would like to use here, just the matrix P^t applied to this recentered indicator. And now f_s* and (Pf_s)* together become exactly this supremum of the absolute value over all r ≥ s, because we have to take care of both even and odd times.
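In symbols, the definitions just introduced are:

```latex
f^*(x):=\sup_{k\ge0}P^{2k}f(x),
\qquad
f_t:=P^{t}\big(\mathbf{1}_A-\pi(A)\big),
\\[4pt]
f_t^*(x)=\sup_{k\ge0}\Big(P^{2k+t}(x,A)-\pi(A)\Big),
\qquad
(Pf_t)^*(x)=\sup_{k\ge0}\Big(P^{2k+t+1}(x,A)-\pi(A)\Big),
```

so that, taken together at t = s, the even and odd offsets cover every time r ≥ s.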
So now I just rephrased the definition of G in this way, in terms of the star functions, and we want to show that π(G) > 1−ε, which is equivalent to showing that π(G^c) < ε. But π(G^c) is upper bounded by π of the set of y such that f_s*(y) > ε, plus π of the set of y such that (Pf_s)*(y) > ε. Now, these star functions are all positive, so we can take the square of both sides of these inequalities, and so this is upper bounded by E_π[(f_s*)²] divided by ε², plus E_π[((Pf_s)*)²] divided by ε², where I used Markov's inequality. So now we have reduced the problem to controlling the second moment of these maximal functions. Now, I said before that if G were defined without the supremum, so just for r equal to s, we would be able to use Poincaré and finish the proof.
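The union bound plus Markov's inequality step just described, written out:

```latex
\pi(G^c)
\;\le\;\pi\big(f_s^*>\varepsilon\big)+\pi\big((Pf_s)^*>\varepsilon\big)
\;\le\;\frac{\mathbb{E}_\pi\big[(f_s^*)^2\big]}{\varepsilon^2}
+\frac{\mathbb{E}_\pi\big[\big((Pf_s)^*\big)^2\big]}{\varepsilon^2}.
```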
So let's do this now. For a function f, because we'll use it in a second, we write, for p in (1, ∞), ‖f‖_p^p for the expectation under π of |f|^p; the expectation is with respect to the measure π. So now, sorry, I don't want to take the star yet; let me first control Pf_s without the star. The 2-norm ‖Pf_s‖₂ is upper bounded by ‖f_s‖₂, because P is a contraction. And now ‖f_s‖₂²: f_s is defined to be P^s applied to (the indicator of A minus π(A)), so this is exactly the variance under π of P^s applied to the indicator of A. If we apply Poincaré, this is upper bounded by e^{−2s/t_rel} times the variance under π of the indicator of A. And now we can just substitute what s was, so this becomes ε³/2 times π(A)(1−π(A)): here Poincaré, and here the substituted value of s, and π(A)(1−π(A)) is the variance of the indicator. Since that variance is at most 1/4, this whole thing is upper bounded by ε³/8. Okay, so what we showed is that the squared 2-norms of both Pf_s and f_s, without the stars, are upper bounded by ε³/8. And if we were to define G just at time s, then by Poincaré we would be done: we would substitute here and get that π(G^c) is quite small, at most ε/4. Now we want some control on the norm, but with a star, and this is where we are going to use Starr's maximal inequality. So, theorem. The setting is as before: P is reversible with respect to π, p is in (1, ∞), and for any f from S to
R, we have that ‖f*‖_p is upper bounded by p/(p−1) times ‖f‖_p, where f* is as I defined earlier. So this is reminiscent of Doob's L^p maximal inequality; this p/(p−1) is exactly the same constant that appears there, and I'm going to present the proof, and you will see that indeed the p/(p−1) comes from Doob's maximal inequality. The idea of the proof is to define a suitable martingale to which one can apply Doob's inequality. Okay, now, armed with Starr's inequality, which we'll prove in a second, let's go back to finish the proof. ‖(Pf_s)*‖₂², that's what we need over here, is going to be upper bounded, taking p equal to 2 so that the constant squared becomes 4, by 4‖Pf_s‖₂², and we already said that this is upper bounded by ε³/8, so this becomes ε³/2. Applying it also to f_s, ‖f_s*‖₂² is upper bounded by 4‖f_s‖₂², and again using the bound from before, this is upper bounded by ε³/2. So if we plug into (***), the three-star display, we get that π(G^c) is upper bounded by ε/2 plus ε/2, which equals ε, and this finishes the proof of the upper bound on π(G^c), which is exactly what we wanted to show. So, modulo Starr's maximal inequality, which I haven't proved yet, we showed that G has large measure, and this completes the proof of the theorem. What remains is to prove Starr's inequality. Proof of Starr's inequality. Okay, let me just recall the definition: f*(x) is defined as the sup over all k ≥ 0 of P^{2k}f(x). As I said before, this looks like Doob's maximal inequality, so we want to find the right martingale. Let X be a Markov chain with X_0 distributed according to π. Now, if we look at P to
the power 2n applied at X_0: what we want is to control the p-norm of f*, which means we want to control the p-norm of the sup over all n ≥ 0 of P^{2n}f; the p-norm is always defined with respect to the invariant distribution π. So it is the same as saying that we want to control the p-norm of the sup over all n ≥ 0 of the absolute value of P^{2n}f(X_0), with X_0 distributed according to π. So what is P^{2n}f(X_0)? It's the expectation of f(X_{2n}) given X_0, just by definition. And now I'm going to apply the tower property, so I'm going to condition on some more things: it's the expectation of the expectation of f(X_{2n}) given X_n and X_0, given X_0. So I condition on both X_n and X_0: tower property. Now, because of the Markov property, in the inner expectation we can forget about X_0, so it becomes the expectation of f(X_{2n}) given X_n, and all of this given X_0, by the Markov property. Okay, so now let's set R_n to be this conditional expectation, the expectation of f(X_{2n}) given X_n. So P^{2n}f(X_0) is the expectation of R_n given X_0. Now, the goal is to show that R is a backwards martingale. In other words, we are going to show that if N is fixed, then if we look at R_{N−n} for n between 0 and N, this is a true martingale. Once we have that, we can apply Doob's maximal inequality to this martingale and obtain an upper bound for the L^p norm of the supremum of R in terms of the p-norm of f. And once we do that, we can pass back to this quantity using conditional Jensen, and there is one final step where we have to send this capital N to infinity, but that will just follow from monotone convergence. Okay, so the first step is to show that this is a martingale. It's important that we start X_0 according to π and that the chain is reversible. Since X_0 is
distributed according to π and X is reversible, it follows that (X_n, X_{n+1}, …, X_{2n}) has the same distribution as (X_n, X_{n−1}, …, X_0). So R_n, which is the expectation of f(X_{2n}) given X_n, is equal to the expectation of f(X_0) given X_n, because these two sequences have the same distribution, by reversibility and by the fact that X_0 is distributed according to π. And now, using the Markov property, we can write this expectation as the expectation of f(X_0) conditioned on the whole future after time n; this doesn't change the expectation, just by the Markov property. So if we now set F_n to be σ(X_n, X_{n+1}, …) and fix N, then this last equality gives us that R_{N−n} is a martingale with respect to the filtration F_{N−n}. Okay, so this is good news, because we want to control the maximum of R. If we take the maximum of R_n over n between 0 and N, the p-norm of this is of course the same as for R at the reversed index N−n, and now we apply Doob's L^p inequality to obtain that this is upper bounded by p/(p−1) times the p-norm of R_0. But what is R_0?
R_0 is simply f(X_0), so the bound is p/(p−1) times the p-norm of f. So what we showed is that the p-norm of the maximum of the R_n is upper bounded by the quantity in the statement of the theorem. But that's not what we want to bound. What we wanted to bound was the supremum over all n ≥ 0 of P^{2n}f(X_0); actually, let me just write it with a maximum: the max over n ≤ N of P^{2n}f(X_0). This is equal to the max over all n of the expectation of R_n given X_0, and this is upper bounded by the expectation of the max of the R_n, given X_0. And now conditional Jensen implies that the p-norm of the max of P^{2n}f(X_0) is upper bounded by the p-norm of the max of the R_n, and we already showed that this is upper bounded by p/(p−1) times the p-norm of f. Now, letting capital N go to infinity and using monotone convergence completes the proof. So, just to quickly go over the proof again: the goal was to find a martingale to which we could apply Doob's maximal inequality. What we did is we expressed P^{2n}f(X_0) as the conditional expectation of R_n, and R is a backwards martingale, so we were able to apply Doob's maximal inequality to this martingale R and get the bound that we wanted. The last step: one needs conditional Jensen, because this quantity is defined in terms of the conditional expectation of R_n, and then we just let N go to infinity and use monotone convergence. And this was the result, which is a very powerful result, because it relates the supremum, so where was f* defined? For a function f we defined f*(x) to be the sup over all k ≥ 0 of the expectation of f of X at time 2k when you start from the point x, and one can control the p-norm, for any p, of this maximal function in terms of the p-norm of the function f itself. So I
think I'm going to stop here. Okay, thank you, Perla. We will now unmute everyone to thank Perla.
Mixing times for Markov chains is an active area of research in modern probability, lying at the interface of mathematics, statistical physics and theoretical computer science. The mixing time of a Markov chain is defined to be the time it takes to come close to equilibrium. There is a variety of techniques used to estimate mixing times, coming from probability, representation theory and spectral theory. In this mini course I will focus on probabilistic techniques and, in particular, I will present some recent results (see references below) on connections between mixing times and hitting times of large sets. Prerequisites: it would be helpful to be familiar with Chapters 4 (mixing definitions) and 12 (spectral methods) from the book Mixing Times for Markov Chains by D. Levin, Y. Peres and E. Wilmer.
10.5446/55660 (DOI)
Welcome everybody. This is the second lecture of the Frank den Hollander course on metastability. Just a reminder that the third lecture will take place on Thursday using the same link. Just as yesterday, there will be approximately two 30-minute parts, and in between we will have an opportunity to ask questions. You should feel free to use the chat to ask questions and answer questions. And again, Elena Puferente is here and she'll be able to assist in answering questions in real time on the chat. A reminder that this lecture is being recorded and live streamed on YouTube and through BERS, so if you do not want to be seen or heard on the recording, you should keep your audio and video off during that period. At the end, we'll stop recording and there'll be an opportunity to ask Frank further questions. Just a reminder that the Zulip platform is there for discussions about the course and the exercises. There were some updated exercises and lecture slides, which I posted on Zulip this morning, and Sarai has posted the slides for today's lecture and the exercises on the chat, so if you haven't got them and want to download them, you can. Okay. Frank's second lecture on metastability will focus on Kawasaki dynamics, so Frank, I'll turn it over to you to share your slides. I will share my screen with you now and I'll open my file in full-screen mode. Okay. Everything visible? It is. Okay. Okay. So welcome back everybody. Yesterday, Eiden and I gave you a quick panorama of the area of metastability for interacting particle systems. We started talking a bit about background, motivation and a bit of history, and then we started to talk about the mathematical setting, which involved a configuration space, a Hamiltonian and a reference measure; these three things together build an equilibrium measure that describes the system in equilibrium.
And then we supplemented that with a Markovian dynamics, again in very general terms, not specifying any details yet, that captures the non-equilibrium behavior that lies at the heart of metastability, because metastability is a non-equilibrium phenomenon: it's about transitions between phases. And a very key thing was a formula that links the metastable crossover times that we're interested in to some basic tools from potential theory, predominantly harmonic functions and capacities. I tried to explain that they play a major role when you really want to compute things about metastability. Eiden and I also recalled two complementary variational principles for capacity, the Dirichlet principle and the Thomson principle, which turn out to be very powerful computational tools. And I was trying to explain why these tools are there and how we can use them in order to really get our hands on the key metastability quantities. But yesterday everything was quite general; I didn't talk about any specific models. It is now time to become more concrete and to see how these tools, ideas and principles can be used and made to work in specific settings. That's what this lecture and the next two lectures are going to be about. So today we're going to focus on Kawasaki dynamics on lattices. In particular, we're going to think of a large finite block in either the square lattice or the cubic lattice, and we're going to imagine particles hopping on the vertices of those lattices. They have an attractive interaction, as you will see later on, that makes them want to glue together to form a droplet and to condense into a liquid. And we will be addressing the question today: how do the particles actually form a droplet on their way from being single dilute particles, representing a gas, to a really filled lattice, which we think of as a liquid?
And we're going to ask questions like: how long does it take the particles to do so, and what are the relevant parameters in the model that control the metastable behavior? So that is what we are going to talk about today. The target for this lecture is to analyze metastable behavior for a lattice gas, and we're going to make this lattice gas evolve according to what is called Kawasaki dynamics; I will define it as we go along. In simple words, what we will be dealing with is particles living in a finite box. These particles can hop around between nearest-neighbor sites; that's the motion. They will feel an attractive interaction when they sit next to each other, so they like to sit next to each other; there's a kind of Van der Waals attractive force. And we're going to imagine that these particles can be created and annihilated at the boundary of the box. We do that because we want to think of this box as living in some big gas reservoir: when particles enter the box from the gas reservoir, we think of them as being created, and when they leave, as being annihilated. And we do that in order to bring the problem down to a finite problem, which is nicer mathematically to deal with than having to work with this entire infinite gas reservoir. One can do these things too, but that's not what we're going to talk about today. And the question will be: how does the system nucleate? If I start from an empty box, how long is it going to take before the box is completely full, and what is the system going to do in order to make that happen? So I've listed here five names of people who have been deeply involved in understanding and working on Kawasaki dynamics. Enzo Olivieri and Elisabetta Scoppola from Rome started with this; they were the first to really look at this in detail.
And then Anton Bovier and Francesca Nardi and I joined later, and it became a really nice adventure with lots of interesting mathematics going on. We have been thinking and working on these problems, and versions of them, over the past couple of years. Now, Kawasaki dynamics is a dynamics in which particles hop around. This means that particles are conserved when they are inside the box: they can be created and annihilated at the boundary of the box, but inside the box they are conserved. So we're dealing with a conservative dynamics, and conservative dynamics in statistical physics are typically a bit harder to deal with than non-conservative dynamics. For instance, if you have Ising spins flipping up and down, there's no conservation of the number of plus spins or minus spins, but for particles there is, and that usually causes some non-locality in the problem. That non-locality will turn out to be really interesting here. In particular, when you want to grow or shrink a droplet of these particles, glued together because of their attractive interaction, particles must arrive from the boundary to the droplet, or they must go from the droplet back to the boundary. So there's a non-locality which, as we will see, makes the problem challenging and interesting at the same time. In particular, we will see that when this gas is building a critical droplet (and from the first lecture you know that there is something like a critical droplet that is the threshold for nucleation), the droplet can have all sorts of interesting motions of particles around its border, and that will lead to a shape of the critical droplet that is more complicated. So we will have to get into issues of the geometry of the critical droplet as we go along. Okay, so that is the setting that we're in, and now I come to specifying exactly the model that we're going to talk about.
And there are four ingredients to defining Kawasaki dynamics on this finite box with what we would call an open boundary, where particles can appear and disappear. We're going to start in two dimensions; at the end of the lecture I will also tell you what we know when you want to do this in three dimensions. We start by imagining a large square box centered at the origin, which we call Λ, and we write ∂⁻Λ for the interior boundary of this box. There is no exterior boundary of the box, because that is where the gas reservoir would be, but we only consider a finite box, and this boundary is special. Now, if we want to describe a lattice gas configuration, then we have a configuration space that in our case is {0,1}^Λ, because 0 means the vertex is empty and 1 means the vertex is occupied by a particle. This describes all possible arrangements of particles in the finite box Λ, and η is a collection of values η_x, where x runs over the box Λ. That is our configuration space. Right, so here's a picture of such a lattice gas configuration, and you see that the ones indicate the locations of the particles and the zeros the vacancies, or holes. You see that there can be only one particle on a vertex at a time, so there's exclusion: two particles cannot sit on top of each other. The boundary of this box, the internal boundary, is important because that's where particles can actually disappear or reappear; once they are in the interior, they can only move around. I'm going to specify that in a second. So, having defined our configuration space, we need to define our Hamiltonian. This is a function that associates an energy to every configuration, and we're going to write down the lattice gas Hamiltonian. It is written up here and consists of two terms. There is a term that captures the interaction between neighboring particles.
So if two particles are at vertices x and y that are neighbors of each other, and both vertices are occupied, so both are one, then they have a binding energy minus U, and this energy is negative. So there is a kind of glue between particles, in the sense that when they're next to each other, the energy of the configuration is lower. And I do this for every pair of neighboring particles; this is the attractive part of the Hamiltonian. Then there is a part coming from an activation energy: I need to give every particle in the box an activation energy, so every particle makes the energy increase by a parameter Δ. This is necessary in order to represent the effect of the reservoir. Imagine that I have a gas and I look at a certain window, which is the box that I want to study; the outside reservoir I have removed and replaced by a condition that says you need some energy to bring a particle into this box. So this Δ is essentially a parameter that controls the density of particles inside my box. This is a classical lattice gas Hamiltonian with an attractive interaction and an activity part, sometimes called a chemical activity, that allows you to control the density of particles in your box. So there are two very important parameters in this model, U and Δ, and we will play with them as we go along; they're very important. Now we need to talk about the dynamics. These particles are going to evolve; what are we going to allow? Well, as I already indicated, we are going to allow particles to hop: a particle can hop from one site to a neighboring site, which essentially means that between these two neighboring sites a zero and a one are interchanged. That captures the hopping.
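In a formula, the Hamiltonian just described (with the first sum over unordered nearest-neighbour pairs of sites in the box, as I read the lecture's convention) is:

```latex
H(\eta)\;=\;-U\sum_{\substack{\{x,y\}\subset\Lambda\\ x\sim y}}\eta_x\,\eta_y
\;+\;\Delta\sum_{x\in\Lambda}\eta_x .
```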
And there's the special role of the boundary: on the internal boundary of the box, particles can be created or annihilated, which means that a zero can turn into a one and a one can turn into a zero. So, going back to this picture: inside the box the ones can only move around, and they can also move to the boundary. But on the boundary there is the additional phenomenon that a zero can turn into a one and a one can turn into a zero, which we associate with a particle entering or leaving the box, mimicking the effect of a gas reservoir, which we do not have, but which we mimic by this boundary condition. Now, having specified which moves are allowed at all, we have to say at what rate these moves happen. Kawasaki dynamics is nothing other than a Metropolis dynamics based on the Hamiltonian that we have defined, at an inverse temperature β. That means that you go from a configuration η to η′, where the transition respects the two allowed moves, at a rate that is e to the minus β times the positive part of the difference between the energy you would have after the transition and the energy you have at the moment. This is the standard Metropolis dynamics that you get once you have specified a Hamiltonian, and it has the inverse temperature β as a key parameter. Again, we may think of the outside of the box as an infinite reservoir, which has been replaced by a particular boundary condition, and we can think of e^{−βΔ} as the density of the particles inside the box: the parameter Δ controls the density of the particles in the box. Right, so we can play with that parameter as a control parameter. For instance, you see that the moment you want to make a particle enter, that will raise the energy by Δ, and that will only happen at a rate e^{−βΔ}.
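As a small numerical illustration of the Metropolis rule just described (the function name and the toy energy values here are my own sketch, not from the lecture):

```python
import math

def metropolis_rate(H_current, H_proposed, beta):
    """Metropolis rate for a proposed move: exp(-beta * [H' - H]_+).

    Downhill or flat moves happen at rate 1; uphill moves are
    exponentially suppressed in the inverse temperature beta.
    """
    return math.exp(-beta * max(H_proposed - H_current, 0.0))

# Downhill move: rate 1, regardless of temperature.
print(metropolis_rate(2.0, 1.0, beta=5.0))   # 1.0
# Uphill move costing Delta = 1: rate exp(-beta * 1).
print(metropolis_rate(1.0, 2.0, beta=5.0))   # exp(-5), about 0.0067
```

In particular, making a particle enter the box raises the energy by Δ, so it happens at rate e^{−βΔ}, matching the density interpretation above.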
And that's why e to the minus beta Delta is essentially the density of particles in the box. Okay, now we're going to look at this problem in a certain regime, and this regime will be a regime of low temperature. So beta goes to infinity. It means all upward energy moves are costly and very difficult to do, and all downward moves are for free, because they occur with rate one. So climbing in energy is something that is difficult for the system, and that is typical for a low-temperature system. And we're going to pick our activity parameter Delta in between U and 2U, and we will have to see why that is an interesting choice. Delta could be any positive number you like, but it turns out that as soon as Delta is between U and 2U, you are in a metastable regime. We will have to see why that is true: your gas is sort of supersaturated. It likes to condense, because if it creates a big liquid, then it will really lower its energy, because Delta is less than 2U. But it's also not terribly supersaturated, so one does not have a situation where particles would come into the box very rapidly and sort of immediately condense, because that would not be a metastable situation. That is certainly a physically reasonable situation, but it doesn't correspond to metastability. So we will have to see why that regime really makes sense and why it is interesting. It's a regime where, yes, the condensation will happen, but it will be difficult for the system to do. And it turns out that there is a certain integer, which we call LC and which is the upper integer part of the quotient of U divided by 2U minus Delta, that will play the role of what we will call the critical droplet size. I'll have a lot to explain to you about where this comes from, but I wanted to mention it immediately here.
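In symbols (my notation), the parameter regime and the critical droplet size just introduced are:

```latex
U < \Delta < 2U,
\qquad
L_c \;=\; \Bigl\lceil \frac{U}{2U-\Delta} \Bigr\rceil \;\in\; \{2,3,\dots\},
```

where the range of L_c follows because the quotient U/(2U - Delta) lies strictly between 1 and infinity when Delta is between U and 2U.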
And the fact that we pick Delta between U and 2U means that this number is not 1 and it's not infinity. It's some number between 2 and infinity, so we're talking about some interesting length that is not infinity and that's also not the length of a single particle. Okay, so we will see where that comes from. Now there is a little non-degeneracy assumption that we have to make. We would like this quotient not to be an integer itself, so that when you take the upper integer part you really move a little up. And this is in order to avoid certain degeneracies and ties that would unnecessarily complicate the computation but wouldn't really contribute anything interesting. Okay, now I'd like to start by giving you a little bit of a feeling for where this number LC comes from, and it is also going to be our first step towards understanding what really is the threshold, in the model that we've been describing, that prevents these particles from just coming in and nucleating immediately. And the way we're going to get our hands on that is in two different ways: one is a static computation and one is a dynamic computation, and I want to explain both of them to you now to give you a feeling for that. So what I would do is to say: suppose I would like to take an L by L droplet. In this picture here it would mean an L by L square filled with ones and all the rest zeros. So if I would ask myself, what would be the energy of such a configuration? Well, that would be easy, because you say: in an L by L square there are L squared particles, each gets an activity Delta. So there would be a contribution plus Delta L squared to the energy of such an L by L droplet. However, in that droplet there are lots of particles sitting next to each other, and if you count how many pairs of particles sit next to each other in an L by L droplet, then that's exactly two L times L minus one. Those are the number of faces at which these L squared particles touch.
So the gluing energy will be minus U times that number. So this is the energy of an L by L droplet of ones inside this box, with the rest zero. And if I plot this as a function of L, it is a parabola that starts at zero, goes up, reaches a maximum at the value U divided by 2U minus Delta, and then goes down, and actually becomes very negative, because of the fact that Delta is between U and 2U. And so this is sort of telling us that the hardest droplet to create is the droplet with the highest energy, because the Metropolis dynamics has trouble moving up. And therefore a droplet of size U over 2U minus Delta, squared, should be something like a critical droplet. That should be something like the barrier. However, we agreed that this number would not be an integer, so LC is just the first integer on top of that, and the reason why we do that is to avoid degeneracies. And so the LC is just to the right here, and it's essentially telling us that if I take an LC minus one times LC minus one droplet, it's still going to be subcritical, it will be before the tip, and an LC times LC droplet will be just beyond the tip and would therefore be called supercritical. So by this very simple computation, just looking at some energy computations, and sort of thinking that probably the lowest energy that you can get is by forming a square droplet (we will have to talk about that later), you see that this number LC plays a key role, because it is the size of the square that you need to conquer in order to fill your box. L equal to zero would correspond to an empty box, and a very large droplet would correspond to a very large L, and then you would have a very negative value and you would be very supercritical. So there is something about this LC that seems to be right. But this is only a static computation. I haven't done anything fancy yet, but we're on track to beginning to understand what a critical droplet should be.
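Collecting the two contributions gives the parabola just described (my notation):

```latex
E(L) \;=\; \Delta\,L^{2} \;-\; U \cdot 2L(L-1)
\;=\; (\Delta - 2U)\,L^{2} + 2UL,
\qquad
\frac{dE}{dL} = 0
\;\Longleftrightarrow\;
L = \frac{U}{2U-\Delta}.
```

Since Delta < 2U, the leading coefficient is negative, so E rises from zero, peaks at L = U/(2U - Delta), and then decreases without bound; L_c is the first integer beyond the peak.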
Okay, well, I can discover this same critical barrier also when I start to think a little bit about the dynamics. So let me again do an argument that should convince you that this LC is a really important object, by saying: let me take some kind of droplet of size L, and let me try to grow on the side of this droplet a whole bar of particles. Suppose I have an L by L droplet and I want to add a bar to this droplet, I want to grow it by a bar. What happens? Well, when you make a particle enter the box, your energy goes up by Delta; when the particle glues itself to the boundary of this droplet, you go down by U. Then you need to bring in the next particle, you go up in energy by Delta, and then you glue that next to the particle that you had. So this would be a protuberance, you add it there, and you go down by 2U, and you continue doing that. Each time you bring in a particle you go up by Delta, and each time you tuck it into the corner next to the previous particles that you attached, you go down by 2U, and when you're done you will have added a bar to your droplet. And if you look at this picture, then the highest energy is this particular droplet here, and it is an energy 2 Delta minus U higher than what I started out from. So this is the barrier that you have to overcome when you want to grow a bar. If I read this whole story backwards, and I ask how much I have to climb if I would remove this bar (because if I remove it, all the arrows turn around), then you can read off that energy as well. So there is a cost for growing a bar of length L and there is a cost for removing a bar of length L, and lo and behold, those two become equal exactly when L is again U over 2U minus Delta.
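Both the static and the dynamic computation are easy to check numerically. A minimal sketch (function names and parameter values are mine; the bar formula uses the fact that a completed bar of ell particles gains 2*ell - 1 bonds, one for the protuberance and two for each particle tucked into a corner):

```python
import math

def critical_length(U, Delta):
    """L_c: upper integer part of U / (2U - Delta), assumed non-integer."""
    return math.ceil(U / (2 * U - Delta))

def droplet_energy(L, U, Delta):
    """Energy of an L-by-L square droplet: Delta per particle,
    minus U per occupied nearest-neighbour pair (2L(L-1) pairs)."""
    return Delta * L * L - U * 2 * L * (L - 1)

def bar_energy(ell, U, Delta):
    """Net energy change of adding a full bar of ell particles to a side:
    ell activation energies minus (2*ell - 1) bonds."""
    return ell * Delta - (2 * ell - 1) * U

# Example regime U < Delta < 2U with U/(2U - Delta) non-integer
U, Delta = 1.0, 1.7
Lc = critical_length(U, Delta)  # ceil(1 / 0.3) = 4
```

With these values, completing a bar lowers the energy precisely from length Lc onwards, matching the static parabola picture.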
So also in the mechanism of what it costs to add or remove a bar you see this number coming back, and again we have decided to assume that this is not an integer, so that LC is the upper integer part and these costs can never form a perfect tie, in order to make things a little bit easier. So with that I have explained, both in a static and in a little bit of a dynamic way, why this critical length, this critical droplet size LC, is so important: because it really represents the size of a droplet that is the hardest to create, that is the barrier between having an empty box and having a full box. And my last slide before the break is this slide here, in which I'm announcing what we're going to try and do with this information. We're going to think of starting the system off with an empty box. There are no gas particles around; we think of this as a gas situation, it's very, very dilute. Then you will see that particles start to come in. A particle comes in alone; probably it leaves the box before anything else happens. And if I wait for a very long time I will see: oops, there are two particles in the box. Maybe they go and sit next to each other, maybe they dissociate again and leave the box. And if you wait long enough, it will happen sooner or later that you create a critical droplet, and once you have done that, you will see that the system energetically goes over the hill, and this critical droplet will grow and grow and grow and eventually fill all of the box. And when all of the box is filled we say: bingo, nucleation has happened and I have a liquid. So that is the kind of thing that we're trying to do, and I will have to go, after the short break, into details about what exactly this critical droplet is. There will be things going on that we call protocritical droplets, so droplets that are almost critical; there will be canonical versions of those, non-canonical versions of those; there will be motion of particles
around the border of the droplet, and I will have to go deeper into the geometry of this critical droplet. We know that it is roughly something like an LC by LC droplet, and I will have to explain that. And then, when you go over the hill, you have your nucleation, and we will have to come to grips with what exactly this critical droplet is, and there's a story to tell; and how long does it take for the system to create this critical droplet, because that will be the most difficult thing to do, and the time it takes to create that critical droplet will really be the nucleation time; and a few more properties around that. So that's what I'm going to zoom in on after the break. So let's have a small break. Okay Frank, there's one question from Jens, asking for confirmation of his understanding of these particles visiting all these protocritical states. He says: well, if LC is not an integer, we never quite reach a stable state unless we keep changing along the boundary; is that what is meant by the non-integer assumption? No, LC is an integer. I think he meant the quotient U over 2U minus Delta, the fact that you had to move slightly away from that critical value. I did that in order to avoid ties; it's some kind of arithmetic non-degeneracy that you want. And I will go much deeper into what this protocritical droplet is, what it looks like, and what this motion along the border is exactly. Okay. There's a question from Tommaso. He's asking: if the density of particles within Lambda is kept fixed, is it then the case that when a particle is created, automatically one is erased along the boundary?
No, I do allow particles to enter, and that doesn't mean that another particle has to go away. When I said the density is e to the minus beta Delta, it means the average: if I look at a vertex, then the probability that there will be a particle there, in equilibrium, is e to the minus beta Delta. But I'm starting from an empty box; this box has to fill up, nucleate, and then it's completely full. This e to the minus beta Delta is basically the density of the reservoir that I imagine around myself: if I would have a reservoir of density e to the minus beta Delta, it would shoot particles into the box in a way that is very similar to what my boundary is now doing. I'm living on a finite box, there's no outside world, because it's much nicer to work on a finite box, but the effect of this Delta is as if I would have a reservoir with that density e to the minus beta Delta. Okay, I think maybe there are more questions, but I suggest we take an honest-to-God three, four minute break and then we'll come back and deal with the questions that are here, if that's okay. Okay, very good, several questions here. So, in no particular order: Francesco is asking, if you have more than one species of particle, could metastability be caused by repulsion between the different species? Interesting question. It is possible to deal with more than one kind of particle. I had a PhD student, Alessio Troiani, who looked at that case, sort of black and white particles, and then there's a U1 and a U2, and there could be a Delta1 and a Delta2, and the possibilities become very rich. You can have things like checkerboards and various things. So yes, we can do that, but it also becomes geometrically much, much harder. And you can allow repulsion and attraction, but you have to have enough attraction so that the system really wants to nucleate, otherwise you're not in a metastable regime. Roman had a question
about the separation of the energy increase when a particle enters the box and the decrease when it attaches. He says: I have difficulty with the Delta in the computation, because if the particles are already there, I don't need to grow the energy by Delta. Well, you pay the Delta when you get in, and then when the particle moves from the boundary towards the droplet, nothing happens. So it really enters at the boundary, and while it is traveling towards the droplet it doesn't do anything; I haven't indicated that here, it's a good question. And then you go down by U only when you really attach it, and you need to do that if you want to grow a boundary. So I'm not looking, in this picture, at what the particle is doing when it's already in the box and travels to the droplet; I'm only looking at the moments when something really happens in the energy landscape. And Mark Holmes, by the way, can people unmute themselves? Because if they want to ask directly, then please do. Mark is asking about the growing and shrinking squares, and about a shape theorem for a square. Yeah, we will see that. Because we're in a low-temperature setting, droplets always want to be as close as possible to a square, and this comes from a discrete isoperimetric inequality. Discrete isoperimetric inequalities are not easy; they're actually harder than the usual continuous ones that we know about, a sphere with maximal volume and minimal surface and stuff like that. But this plays an important role, and especially at very low temperature: it's very hard to climb in energy, and the system tries to do this in the easiest way possible, and that's why this dynamics has a tendency to want to grow droplets in the form of squares, quasi-squares, where you are adding a bar and then again another bar, because otherwise it's energetically too costly. But that's a key point in the later analysis. So you may have indirectly just dealt with this: Shenji Yang was asking, why square, why not rectangle, because they look like they're local minima too. Yeah, they are local minima, but globally not. If you have a very elongated rectangle and you would sort of break it up and put it into a more square-like shape, you get a lower energy: you don't change the number of particles, but you benefit more from the gluing. But these are delicate issues, it's not immediately obvious, and it's hidden in the isoperimetric inequalities. Okay. Roman had a follow-up, I should have picked this up earlier. He was saying: you may have a lot of particles already in the box, it's not one at a time. Yes, but it's very hard to create them, because every particle that you bring into the box makes the energy go up. And in principle you could say, well, I have three or four particles floating around independently. One of the things that you will have to show is that the system doesn't want to do that. It doesn't want to create six droplets; it's so hard to create droplets that the easiest thing it can do is create one droplet and grow it layer by layer, and that's what is actually happening. Okay, so shall I move on? Maybe we didn't deal with everything; I'm going to try to answer some of these privately. But there's also time after the second part; I'll be happy to continue then. Okay, so we now need to go a bit more into detail about what this critical droplet is, because that will be crucial for the fine analysis that I'm about to present to you. We already know that somehow this critical droplet is somewhere in between an LC by LC droplet and an LC minus one by LC minus one droplet. So what exactly is it? And this turns out to be an
interesting thing. So we're thinking about two configurations, the empty box and the full box, and they represent for us the gas, where there's nothing, and the liquid, where everything is filled. And the metastable regime is such that it's hard to go from the empty to the full box, but the full box is really the lowest in energy, as we already saw from this parabola computation. And we're asking ourselves: what is the saddle point between these two configurations? What is the lowest set of configurations that, with your allowed moves, you have to move through in order to go from empty to full? And I'm going to give a description. The description requires a bit of attention, because there's something interesting happening here with Kawasaki dynamics that has to do with motion of particles around the border of the droplet, and this is an interesting thing, and it also provides us with a level of richness and mathematical challenge that is nice. I'm first going to talk about basic protocritical droplets. Protocritical means: I'm almost critical, I'm still waiting for one particle to arrive from the boundary of the box, and then I'm over the hill. And there are certain nice protocritical droplets. One is an LC minus one times LC quasi-square with a single protuberance attached to the longest side, and there's also a version where this protuberance is attached to one of the shortest sides. So either the side of length LC or the side of length LC minus one. And here's a picture of an example of a simple protocritical droplet, which we call a canonical protocritical droplet, and this shape is one of the shapes that is lying on the saddle point between the empty and the full configuration. And one of the exercises that we have prepared is for you to do that computation and argue that this is a droplet that has the lowest energy for the volume that it is
given. Here, I mean: if I want to fill my box, all particle numbers will sooner or later have to occur; I have to have one, two, three, four particles, until I have filled my box. So I'm looking for shapes that, for a given particle number, have the lowest energy, and there are discrete isoperimetric inequalities, not difficult in two dimensions but non-trivial in two dimensions, that will tell you that this is one of the shapes that has the lowest energy given its volume. Now what happens is that before the next particle comes in, it's possible to have motion of particles around the border of this droplet. This particle can start to move around, but it turns out that when this particle is close to the corner here, it may allow other particles to sort of land on top of it, and it's possible that this protuberance facilitates the motion of particles on top to move around the corner to the other side. And this is an interesting phenomenon: when you first see this you say, oh, it's not possible, but actually there are some pictures in the exercises that show you that this is possible. So it's possible that particles start to move around the boundary of the droplet. And what happens is that, since Delta is larger than U, all paths that are such that you never exceed your current configuration in energy by more than U will happen, and will easily happen, before your next particle comes from the boundary of the box. So this droplet starts to wobble around, through motion of particles around the border, even before the next particle comes in and tries to attach itself. And in fact, what happens is that this set D has a certain structure, I will talk about it in a second. All these protocritical droplets play a role, and then at some point there's another free particle coming in and saying: yeah, here I am, I go to the protocritical droplet and I attach myself, and I begin to applaud, because I
say to everybody: we have now gone over the hill, all of us together. And that is what is happening. This set of protocritical droplets has a certain structure. There are the simple ones, that are quasi-squares of size LC minus one times LC plus a protuberance, and then there's a whole bunch of things that you can obtain by sliding particles around the boundary of the droplet. And there is a set that one can describe, and it's not easy, and I decided not to go into details here; this is done in an exercise though, so that you can get a feeling for this interesting phenomenon of motion of particles around the boundary of the droplet, which is possible because you can make moves of cost U before the next particle comes into the story. Okay, so there is a rich structure which we have under control, and at this moment we are done with really understanding the geometry behind our problem. There is the empty configuration, it has zero energy, and then there is the full configuration, where the box is entirely full; that would correspond to my liquid. And I've argued that for very large beta, which is the regime that we're looking at, very low temperature, the barrier between these two is really given by a whole family of critical droplets that are all lying on the saddle point between these two. And so here again you see this paradigm picture. When I showed this for the first time I said: this is very naive, these things have a structure. And indeed the critical droplets here have a structure; it's a whole set of droplets that you can describe geometrically, and this is the barrier that the dynamics has to go over. So we have to ask ourselves: how costly is it to go up this amount of energy in order to go over the hill? And that is what we're going to state now in a theorem. And here comes the main theorem for this particular model, and it is built on the tools that I've been
describing: the Dirichlet and Thomson principles, test functions and test flows, isoperimetric inequalities, and various other things. It says that, after all the smoke has cleared up, the average time that it takes you to go from an empty box to a full box, so this is what I would call the average nucleation time, is of a very simple form: apart from a pre-factor tending to one, there is a constant here and an e to the beta times another constant. And this constant here, this Gamma, is exactly the depth of the hill that you need to climb out of when you come from empty to full. So it's exactly the height difference between this one and that one; that is what Gamma is, and that is a computation that you can easily perform. This energy is zero; this is the energy of the protocritical droplet plus the free particle that comes around with a flag and says: I'm here, I'm here, I'm helping you over the hill. And you can do a straight computation using the lattice gas Hamiltonian and you find this number exactly in terms of your parameter LC. So this number is completely explicitly computable, and it is the solution of an isoperimetric inequality; it's a minimizer consisting of the whole class of these protocritical droplets, which have a rich structure. And then there is a pre-factor, for which you can write down a variational principle coming from the Dirichlet and the Thomson principles, and this factor is, for finite Lambda, not so easy; it is some complicated number. However, if the box is very large, this number simplifies: it scales like the logarithm of the volume divided by the volume, and if you multiply it with this factor it will converge to a limit. So for a very large box I know what this pre-factor is, and the limit is given by this number here, which exactly counts the number of protocritical droplets that we have been talking about. And that is very reasonable, because each of these protocritical droplets could be your gate to go from
empty to full, and if I have twice as many gates to go through, my average crossing time should go down by a factor two. So the fact that this pre-factor picks up something like one over the cardinality of the set of gates that I have available is very logical. And if you go deeper into the geometry, this turns out to be given by a certain sum that is explicit here, and there is a scaling factor that depends on the volume of the box. And this has to do with capacities of simple random walk having to travel from the boundary into the neighborhood of a critical droplet, and that has a very nice scaling. And you see that, since the critical droplet can appear more or less anywhere in the box, there is a proportionality factor one over the volume of Lambda, if I would bring it to the right; so there is a multiplicity here that you take into account. And the logarithm has to do with the fact that two dimensions is the critical dimension for random walks. So the story here is: yes, we've been successful. We have a very precise description, including an energy barrier and something that has to do with the geometry of critical droplets, and counting, and stuff like that. So it's a happy ending, with a very sharp result that is as precise as you may wish at this point. So that's a good story. And there is the geometry of the critical droplet, coming from an isoperimetric inequality: what is, given a certain volume, the droplet with the lowest energy, and, if I want to climb from empty to full, what is the lowest threshold? Well, there are many of them, and we have captured them in a computation that goes into the deep geometry of the problem, some of which is explained in the exercises. So this pre-factor is usually not easy, but we have a good situation here: if the box is very large, we have a very good description of this pre-factor. So nucleation in a large box is now really well understood, in a lot of quantitative
detail. So I'd like to have a few more slides in which I'm going to tell you something about what happens when you try to do this in three dimensions. So suppose I take a large cubic box in Z3, and again the same story: on the boundary of the box particles can be created and annihilated; once they walk in the interior of the box they are preserved; there's an activation energy Delta; there's an interaction energy between neighboring particles of strength minus U, a binding energy. So I'm not changing the model, I'm not changing the dynamics. The only thing that I change is that I'm now living on a cubic box rather than a square box. And the question is very legitimate: could you deal with that too, and what is going to happen? And it turns out that yes, you can deal with that too, and we will pretty much find the same result. There are, however, a few things that become a little bit more complicated in three dimensions, which I want to quickly point out, two in particular. It turns out that there are two lengths that start to play a role: one is U over 3U minus Delta, and the other is 2U over 3U minus Delta. Remember that the critical length that we had in two dimensions was U over 2U minus Delta, upper integer part; but now there are two lengths that become interesting, and I don't want to go into too much detail here, but I do want to show you some of the richness, because this is becoming a bit more complicated now. We will again have protocritical droplets; we will now need to solve a discrete isoperimetric inequality in dimension three, which is really difficult. There's a beautiful paper by Alonso and Cerf in the late 90s solving some of these issues, and it's really combinatorially beautiful and challenging, and in three dimensions that is not easy.
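As an aside, before going deeper into three dimensions: the two-dimensional barrier Gamma from the theorem can be evaluated from the droplet geometry. A sketch (the closed form below is my reconstruction, counting Lc*(Lc-1) + 2 particles and 2*Lc**2 - 4*Lc + 2 bonds in the protocritical droplet plus free particle; the prefactor K is left as a placeholder, not the true variational constant):

```python
import math

def critical_length(U, Delta):
    """L_c = ceil(U / (2U - Delta)); we assume the quotient is not an integer."""
    return math.ceil(U / (2 * U - Delta))

def energy_barrier(U, Delta):
    """Barrier Gamma for 2D Kawasaki nucleation, reconstructed from the
    droplet geometry: Lc*(Lc-1) + 2 particles, 2*Lc**2 - 4*Lc + 2 bonds.
    (This closed form is my reading of the lecture, not a quoted formula.)"""
    Lc = critical_length(U, Delta)
    particles = Lc * (Lc - 1) + 2
    bonds = 2 * Lc * Lc - 4 * Lc + 2
    return Delta * particles - U * bonds

def mean_nucleation_time(U, Delta, beta, K=1.0):
    """Leading-order mean nucleation time K * exp(beta * Gamma)."""
    return K * math.exp(beta * energy_barrier(U, Delta))
```

With U = 1 and Delta = 1.7, for instance, Lc = 4 and Gamma = 1.7 * 14 - 18 = 5.8, and lowering the temperature (raising beta) makes nucleation exponentially slower, as the theorem says.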
And it turns out that, if I take the analog of the critical droplet size in three dimensions, which would be called MC, then again something like an MC by MC by MC cube is roughly the critical droplet. But it's a bit more complicated: it can happen that on this cube you actually attach a two-dimensional critical droplet on one of the faces, and then on one of the faces of that you can stick a protuberance. I'm going to show you a lovely picture that Francesca Nardi made a number of years ago. So it turns out that there are actually two length scales playing a role, a two-dimensional length scale and a three-dimensional length scale. The three-dimensional length scale describes roughly the size of your cube, and the two-dimensional length scale describes what a two-dimensional critical droplet attached to one of these faces should look like. And there are certain delicate things about some parameter that could be either zero or one, depending on some arithmetic properties of these numbers, whether they're compatible or not. I don't want to go into the detail, but I wanted to show this to you to see that there is a richness coming up, and at the same time this can be dealt with, you can pin it down after a lot of thinking. So let me first show this picture. This is what a critical droplet would look like: it's a quasi-cube, and then on top of that there is a critical droplet, as we saw it in two dimensions, attached to it. And then here is this guy that comes from the boundary, waves the flag and says: I'm about to arrive. And we're all going to go over the hill by the time this particle tucks itself into the corner here, because then this thing will be so stable that the rest of the particles will rain on top of it. So you see here a three- and a two-dimensional structure in this thing, and this is all captured in the description of a whole rich set of protocritical droplets. And again the energy barrier that
this object represents is computable. It's not so simple anymore as before, and there are these two lengths around. I don't expect you to be able to grasp this during this lecture, but the message is: you can do it, and there is a lot of richness, and there are now two scales, a three-dimensional critical length and a two-dimensional one, playing a role. And here's a beautiful picture of one of those critical droplets that would represent the barrier for the nucleation problem in dimension three. So I'm going to finish now, I'm almost done. Again we have a result of a very similar nature as we had in two dimensions: the average nucleation time to go from empty to full, to go from a gas to a liquid, is, apart from a negligible pre-factor, again of the shape a constant times e to the beta Gamma. This Gamma is the barrier that you have to overcome; we wrote down a formula for it here. It's more complicated in three dimensions than in two dimensions, but it's all explicit and computable. And there is a pre-factor that again is the solution of a variational formula, and this comes out of again applying capacity estimates and the Dirichlet principle and the Thomson principle. And it turns out that, by the sheer complexity of discrete isoperimetric inequalities in dimension three or higher, it is not possible to really write down a simple formula for this object, not even when the box is getting very big. We know that it's going to scale like one over the volume, but we only have bounds on this number; there's something intrinsically complicated in this pre-factor, even though we can write down variational formulas for it. Right, so that concludes my story here. The take-home messages: yes, for Kawasaki dynamics in two and three dimensions you can make things work, you can build the general theory, and in the end you get a very sharp estimate for the mean transition times. It comes through computations that involve capacities, some of which
you will see in in the exercises and there's there's a happy ending to the story in the sense that you can all make it work and there's a beautiful theory behind it there's a geometric richness deeply linked to two-dimensional and three-dimensional discrete isoparametric problems that you have to solve and that is mathematically nice and it is and this manifests itself into all the parameters this gamma and this pre-factor kappa in order to describe it so it's it's it's really for kawasaki dynamics the you know the sharpest result that that that has been around that that the sharpest result that that is around and nothing has been improved since that yet so it's a very successful computation based on nice mathematics and nice nice behavior so thank you very much so uh sarah will now unmute us so we can uh thank frank for a terrific lecture extremely clear and beautiful results thanks
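In formulas, the scaling of the mean nucleation time described verbally above can be summarized as follows; this is a schematic rendering (the subscripts and the precise definitions of the quantities are my own shorthand for what the lecture calls the gas state, the barrier Gamma and the prefactor K):

```latex
\mathbb{E}_{\mathrm{gas}}\bigl[\tau_{\mathrm{liquid}}\bigr]
  \;=\; K \, e^{\beta \Gamma}\,\bigl[1+o(1)\bigr],
  \qquad \beta \to \infty ,
```

where Gamma is the (explicit, computable) energy of the critical droplet, involving the two length scales, and the prefactor K solves a variational problem for which, in dimension three and higher, only bounds are known.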
Metastability is a widespread phenomenon in the dynamics of non-linear systems subject to noise. In the narrower perspective of statistical physics, metastable behaviour can be seen as the dynamical manifestation of a first-order phase transition. A fruitful approach to metastability is via potential theory. The key point is the realisation that most questions of interest can be reduced to the computation of capacities, and that these capacities in turn can be estimated by exploiting variational principles. In this way, the metastable dynamics of the system can essentially be understood via an analysis of its statics. This constitutes a major simplification, and acts as a guiding principle. The setting of potential theory relevant for interacting particle systems is that of reversible Markov processes. Within this limitation, there is a wide range of models that are adequate to describe a variety of different systems. Our aim is to unveil the common universal features of these systems with respect to their metastable behaviour. The first lecture will be an introduction to metastability. In the other three lectures, we will focus on three examples in detail: - Kawasaki dynamics on lattices. - Glauber dynamics on random graphs. - Widom-Rowlinson dynamics on the continuum. Reference: Anton Bovier and Frank den Hollander, Metastability -- a Potential-Theoretic Approach, Grundlehren der mathematischen Wissenschaften 351, Springer, Berlin, 2015.
10.5446/55661 (DOI)
Full screen mode. Okay. There we go. Right. So welcome back everybody. On Tuesday, Aynan and I presented an example of a... oh, sorry. Okay. We presented an example of Kawasaki particle-hopping dynamics on a finite piece of lattice, and we showed that in a certain metastable regime of low temperature and low density there is nucleation of a gas into a liquid, and that we could actually quantify this metastable crossover in a detailed description. A lot of the effort was centred on understanding what exactly are the critical droplets that form the barrier for the nucleation under Kawasaki dynamics, and there is a certain richness there. This is always very important, because most of the action, also in the variational principles that you use to do computations, is around what is happening near the critical droplets; they are the most delicate thing that we need to understand. So here we're going to move to a completely different example. We're going to look at Glauber spin-flip dynamics on finite but large graphs, and we're going to be interested in a different metastable regime: not low temperature, we're going to consider any subcritical temperature, but we're going to look at the limit of large volume. It's a specific other type of metastable regime that we're going to look at. Our task will again be to capture and to quantify metastable crossover times. And then we will see that there is a new ingredient coming around, namely we will need to do a bit of homogenization, because now the environment in which we're working is a random graph, no longer a regular lattice, so other things are going to come around. Okay, so let me try and define the problem for you. And this is going to come again in four ingredients.
The first thing that we're going to start out with is a graph G, and a graph is nothing other than a set of vertices V and a set of edges E that connect pairs of vertices. We're going to assume that this graph is connected; we don't want it to fall apart into two or more parts. And we're going to imagine that on every vertex there is an Ising spin that can take the value plus one or minus one, and that these Ising spins are going to interact with each other, as Ising spins usually do, along the edges. So if there is no edge between two vertices, the spins at these vertices are not interacting, and if there is, they are. We always need to begin by defining a configuration space, and the configuration space omega is now the set {-1, +1} to the power of the vertex set, so this is all possible arrangements of Ising spins on the vertices. The Ising spins I denote by sigma; for Kawasaki dynamics I was using the symbol eta, because there the values were zero and one, occupied or empty, and here, to flag the difference, we use a different symbol. So our configuration space is as given here. Then we need to define a Hamiltonian, so we need to assign to every spin configuration an energy, and we're going to use the standard Ising Hamiltonian. It consists of an interaction part, where two Ising spins interact with a ferromagnetic interaction strength J, which is strictly positive, a parameter we can play with, and the interaction is only present along the edges in the graph. And then there is an external magnetic field h, which we're also going to take positive. This is an interaction that every spin has with the magnetic field, so spins prefer to align with the magnetic field. And lower energies are more favorable in the Gibbs measure than higher energies.
And also, if spins are parallel then this makes the sum big, and with the minus J in front it makes the energy small, so this J really has a tendency of wanting to align spins where it can. Okay, so we have our configuration space and our Hamiltonian. Now we need to define the spin-flip dynamics, and for this we're going to again use the standard Metropolis dynamics associated with the Hamiltonian that we have just defined. The allowed moves in this case are moves where a single spin flips, either from up to down or from down to up, and you accept a possible flip at a rate that is e to the minus beta times the energy difference that this flip would cause. There is a parameter beta in front; this is the strength parameter in your dynamics that tells you how difficult it is to climb up in energy and how easy it is to go down in energy, and it plays the role again of inverse temperature, very similar to what we have seen before for Kawasaki dynamics. And then there is a Gibbs measure, which is e to the minus beta H of sigma on the configuration space, normalized by a partition function. The reference measure that we're going to take, the measure you would take if there were no interaction at all, is in this case simply the counting measure; that's why there is no measure appearing here, whereas before we used the symbol q for it. And it is a well-known fact that this equilibrium measure is the reversible equilibrium of this Metropolis dynamics; that's automatic from the way the Metropolis dynamics is defined. So that puts us in business in introducing the key ingredients for our model, as we should always do. And there are again three sets that we are interested in: a metastable state, a stable state, and a crossover state.
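To make the setup concrete, here is a minimal sketch of the single-spin-flip Metropolis dynamics just described. The function names and the dictionary representation of a configuration are my own choices, not notation from the lecture, and the discrete-time sweep is only a caricature of the continuous-time chain; it does, however, have the same Gibbs measure as reversible equilibrium.

```python
import math
import random

def hamiltonian(sigma, edges, J, h):
    """Standard Ising Hamiltonian: ferromagnetic interaction along the
    edges of the graph, plus an external magnetic field h."""
    interaction = -J * sum(sigma[u] * sigma[v] for (u, v) in edges)
    field = -h * sum(sigma.values())
    return interaction + field

def metropolis_rate(sigma, v, edges_at, J, h, beta):
    """Metropolis rate for flipping the spin at vertex v:
    min(1, exp(-beta * delta_H)).  Flipping sigma[v] changes the energy by
    delta_H = 2 * sigma[v] * (J * (sum of neighbouring spins) + h)."""
    local = sum(sigma[w] for w in edges_at[v])
    delta_h = 2 * sigma[v] * (J * local + h)
    return min(1.0, math.exp(-beta * delta_h))

def glauber_step(sigma, vertices, edges_at, J, h, beta, rng):
    """One attempted single-spin flip (discrete-time caricature)."""
    v = rng.choice(vertices)
    if rng.random() < metropolis_rate(sigma, v, edges_at, J, h, beta):
        sigma[v] = -sigma[v]
    return sigma
```

Note that the rate only uses the spins neighbouring v, which is exactly the locality of the interaction along edges described above.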
And we will have to figure out what these states are in this particular model; that's not so obvious at this stage, but we will be looking for them, and we will see that they have something to do with the magnetization of the system, so we will slowly get there. I would like you to be fully aware of the fact that, because the graph is inhomogeneous, we are dealing here with an inhomogeneous setting, and that is challenging. There's nothing periodic here; we really have to come to grips with the fact that the interaction is only working along the edges, and if your graph is very inhomogeneous that will have an effect. In fact, later we're even going to take this graph random, so then it means that you're essentially talking about an Ising model in a random environment, and we will come to that as we go along. And I want to show you this paradigm picture that you have seen several times before. So our state space: again I'm drawing it in a one-dimensional fashion, which is of course ridiculous, but it's nice to keep this picture in mind all the time. There will be a metastable state, a crossover state and a stable state, and we want to somehow imagine what is going on. What will happen is that the stable state will correspond to a certain positive magnetization that the system would like to have in equilibrium, and we're going to start the system off with too small a magnetization, in fact a negative magnetization, and we need to wait until it goes over the hill. Because the external magnetic field is positive, the system likes to have a positive magnetization, but it needs to go over a barrier to get there. And that's what we're going to try and quantify.
On top here I have written not energy but free energy, and that is because this picture is no longer looking only at the Hamiltonian, only at the energy. Because we are going to be interested in very large graphs, we will have to deal with entropy. Entropy is going to come into the game, namely how many configurations correspond to a certain magnetization, and that's why it is now a game of free energy: a competition between energy and entropy. Again, this is still a rough paradigm, and you will see this picture coming back in much more detail when we really sit down and say what is happening. Okay, now let me very quickly remind you, just to be self-consistent, of the key formulas as they come out of this very important paper by Bovier, Eckhoff, Gayrard and Klein that I've already mentioned a few times. We're going to think of P sub m as denoting the probability distribution on path space of our Glauber dynamics on this configuration space that carries this graph as its basis. Suppose you start in a certain state m, which we will have to specify; again the question is how long it takes before you go over the hill. And there is this nice formula, which should be true in metastable regimes, saying that this is essentially a story about capacities that we need to estimate. We will have to quantify what this m and s really mean, and we will get to that. In this case we're going to be interested in large volumes and, in principle, arbitrary temperatures, as long as they correspond to a situation where there is a phase transition, because that's when metastable behavior can and will occur.
Right, so a very quick reminder: this capacity is given by this Dirichlet principle, and for the specific Ising spin-flip Glauber dynamics that we are dealing with, the Dirichlet form takes a particular shape. It is always a sum of squares of differences of a test function, and there is this function in front here that comes from the Metropolis dynamics. You are looking at all possible transitions where you flip a single spin; that's what this tilde means, the tilde indicates an allowed move, and we are only thinking of moves that correspond to single spin flips. And here you see that if you want to minimize this Dirichlet form, you want to put the action where this Hamiltonian is large, because then this factor is small and you can permit yourself to do something there. Whenever you go away from the top, this factor gets much bigger and you want to essentially kill the difference term by making it one minus one or zero minus zero. These are things that we talked about in the first lecture: everything that is really happening in the metastable regime revolves around where this Hamiltonian is maximal. Then there is the Thomson principle, also very simple, which is a supremum, and it takes this particular form here, where the supremum is taken over unit flows, that is, flows along edges in the graph such that the flow out of m is one, the flow into s is also one, and at every other state the inflow and outflow sum to zero. So those are the tools that are crucial for getting your hands on the problem, and here too, for Glauber dynamics on graphs, these continue to be the key tools with which you are going to play. So now I'm going to do two things. I'm going to first start with a very simple graph, which is the complete graph.
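The Dirichlet principle is easiest to see in action on a toy example. The sketch below is my own illustration, not the lecture's spin system: a one-dimensional birth-death chain, where the capacity between the two endpoints has a closed-form series (effective-resistance) expression, so one can check that the harmonic test function attains the infimum while any other test function only gives an upper bound.

```python
def dirichlet_form(f, mu, p_up):
    """Dirichlet form E(f) = sum_i mu(i) p(i, i+1) (f(i+1) - f(i))^2
    for a birth-death chain with reversible measure mu and up-rates p_up."""
    return sum(mu[i] * p_up[i] * (f[i + 1] - f[i]) ** 2
               for i in range(len(p_up)))

def exact_capacity(mu, p_up):
    """Series formula: cap(0, N) = 1 / sum_i 1 / (mu(i) p(i, i+1))."""
    return 1.0 / sum(1.0 / (mu[i] * p_up[i]) for i in range(len(p_up)))

def harmonic_minimizer(mu, p_up):
    """Equilibrium potential f(i) = P_i(hit 0 before N), the minimizer
    in the Dirichlet principle, built from the edge resistances."""
    r = [1.0 / (mu[i] * p_up[i]) for i in range(len(p_up))]
    total = sum(r)
    return [sum(r[i:]) / total for i in range(len(r))] + [0.0]
```

With mu a Gibbs measure for a potential with a barrier, the resistances blow up at the top of the barrier, and the minimizer drops from one to zero almost entirely there: exactly the mechanism described above, where all the action concentrates near the maximum of the Hamiltonian.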
And this leads us immediately to a very famous and old model, the Curie-Weiss model with spin-flip dynamics, and I'm going to first tell you what we know there. This is a very classical situation, but we must first understand it before we are ready to go to more complicated graphs. In fact, after the break I will continue with the Erdős-Rényi random graph, and then at the end of the lecture I will quickly mention a few other random graphs that people have looked at, which are also quite interesting. So, we have the complete graph on N vertices. This means that in our Ising Hamiltonian everybody is interacting with everybody else; all edges in the graph are there, so it's a really very simple situation. And let's see what we know about that. This is an old problem that has, however, only really been pinned down in full detail in the works of Bovier and coauthors. Since every spin is interacting with every other spin, it is natural to make a choice for the interaction parameter: we are not going to keep it fixed, because we want to let N go to infinity; we are going to make it one over N, where N is the number of vertices. We do that to ensure that the total interaction of a spin with all the other spins is still of order one, because we want to pass to the limit N to infinity and we do not want a spin to end up having an infinite interaction with everybody else. So this choice is very natural, and we take one over N rather than, say, ten over N because that's just a trivial scaling. Since we are in a mean-field situation, where every spin is interacting equally strongly with every other spin, we are in the very nice setting that you can use a lumping technique, in which, rather than following the dynamics of the full configuration, we are only going to follow the dynamics of the magnetization.
So we're going to project down from a very high-dimensional problem to a very low-dimensional problem, because there is so much symmetry in the problem that this works. This is of course very standard, it is the usual thing that happens in mean-field theory, but let's see how it works out in this particular setting. Right. The quantity to look at is what we call the empirical magnetization: you sum all the spins and divide by N. This is a number somewhere between minus one and one, and it expresses what the net magnetization of the configuration is. Since the spins, when they flip, change from minus one to plus one, this actually lives on a fine grid of mesh two over N in the interval between minus one and plus one. And if you look at what your dynamics is really doing, your spin-flip dynamics with the Metropolis rates for the Ising Hamiltonian on the complete graph, then it turns out that this quantity evolves over time as a continuous-time random walk that moves on this very fine grid in the interval, and it behaves as if it is in a potential: if the potential goes up, it has trouble going up, and if the potential goes down, it likes to go down. If you do the computation, and this is part of the exercises that come with this lecture, then you will see that it is a random walk on that fine grid in a potential given by a very explicit function, and this function is what would be called the free energy per vertex. It has an energy part, where you see something quadratic coming from the interaction between spins and something linear coming from the interaction with the magnetic field, and there is an entropy term that keeps track of how many spin configurations actually correspond to a given value of the magnetization, and this is crucial.
If you change the magnetization then you change the number of configurations that correspond to that magnetization; every magnetization has its own entropy, and you need to compute that. It is a very simple computation, when you go down from the big spin space to this one-dimensional magnetization space, to see that this is the factor that comes out. And this quantity, having an energy and an entropy term, deserves the name finite-volume free energy per vertex, because we're really normalizing everything by the number of vertices. So this magnetization is doing something simple, a random walk in an explicit potential, and that is an object that we can compute with. Now imagine that I put my system in a starting situation where the magnetization is negative, and it wants to evolve towards a situation where the magnetization is positive, because the external magnetic field is positive and the spins like to align with that field. Well, that is perhaps not going to happen so easily, because the magnetization has to move from a minus value to a plus value. In order to follow how long that takes, I might as well look at how long it takes the magnetization to move from a negative to a positive value. So by looking at the evolution of this empirical magnetization, which is very simple, we can still track what the metastability is doing in the space of all configurations, by using the symmetry here. Now, fine, we're going to let N go to infinity, keeping beta fixed. We are interested in a very large system, so what might happen? Well, what might happen is that in the limit as N goes to infinity this empirical magnetization starts to perform a Brownian motion on that unit interval: what is a random walk on a very, very fine grid becomes, as the grid converges to the full interval, a Brownian motion.
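The lumping step can be checked directly: on the complete graph with J = 1/N, the energy cost of a flip depends on the configuration only through the value of the flipped spin and the total magnetization, which is exactly why the magnetization alone is Markovian. A small sketch (illustrative code with my own naming):

```python
def delta_h_flip(sigma, i, h):
    """Energy change when flipping spin i under the Curie-Weiss Hamiltonian
    H(sigma) = -(1/N) sum_{u<v} s_u s_v - h sum_u s_u.
    The answer involves only sigma[i] and the total magnetization,
    so all up-spins share one flip cost and all down-spins another."""
    n = len(sigma)
    total = sum(sigma)
    return 2 * sigma[i] * ((total - sigma[i]) / n + h)
```

This is the symmetry that fails on a diluted graph later in the lecture: once edges are missing, the flip cost depends on which neighbours a spin actually has, not just on the total magnetization.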
And it will be a Brownian motion in a potential that is quite simply the limit of the free energy per vertex that we had before. So if I take this quantity and let N go to infinity, here is the N-dependence, then this object converges, and what it converges to is a very simple function. Except that I have taken out a constant minus log two, because this turns out to be redundant and it doesn't play any role, so I'm just removing a constant in order to make the computations a little more palatable. This computation is also part of the exercises: how you do that and what exactly is happening. So the empirical magnetization performs a Brownian motion on this interval in a certain potential, which we will have to look at, and I'll show you a picture in a second. And this is exactly the setting that Kramers in the 1940s analyzed when he became interested in metastable behavior: he looked at Brownian motion in a double-well potential and did computations. So for the Curie-Weiss model, which comes from the world of interacting particle systems, we enter, in the limit as N goes to infinity, the world that Kramers looked at, and that has become a paradigm for the analysis of metastable behavior. So we really have a very simple setting for what the magnetization is doing. Right, so it's time that I show you a picture of what this free energy per vertex looks like as a function of the magnetization m, and if you look at this formula in more detail, then it turns out that this thing is a double-well potential and it looks like this. There is a global minimum at a positive magnetization, coming from the fact that if the system is positively magnetized it is aligned with the external magnetic field; it really likes that and has the lowest possible free energy.
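The limiting double well can be explored numerically. In the sketch below (my own code; the formula is the limiting free energy per vertex with the constant log 2 removed, as in the lecture), a crude grid search locates the two local minima and the saddle between them for parameter values in the metastable regime:

```python
import math

def free_energy(m, beta, h):
    """Limiting Curie-Weiss free energy per vertex (constant log 2 removed):
    f(m) = -m^2/2 - h m + I(m)/beta, with I the binary entropy term."""
    entropy = ((1 + m) / 2) * math.log((1 + m) / 2) + \
              ((1 - m) / 2) * math.log((1 - m) / 2) + math.log(2)
    return -m * m / 2 - h * m + entropy / beta

def critical_points(beta, h, steps=20001):
    """Crude grid search on (-1, 1) for interior local minima and maxima."""
    ms = [-1 + 2 * (k + 1) / (steps + 1) for k in range(steps)]
    fs = [free_energy(m, beta, h) for m in ms]
    minima, maxima = [], []
    for k in range(1, steps - 1):
        if fs[k] < fs[k - 1] and fs[k] < fs[k + 1]:
            minima.append(ms[k])
        if fs[k] > fs[k - 1] and fs[k] > fs[k + 1]:
            maxima.append(ms[k])
    return minima, maxima
```

For beta = 2 and h = 0.1, say, this finds a metastable minimum at negative magnetization, a stable minimum at positive magnetization, and a saddle in between that is itself negative, matching the picture described in the lecture.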
It has another minimum, which we call the metastable state, at a negative value: if the spins are pointing negative they are not aligned with the magnetic field, but they are still very happy, because they are all pointing negative and they are getting a lot of interaction from each other. And there is somewhere in between a threshold, at some value m-star that is the solution of a certain consistency equation; it sits somewhere in the middle and also happens to be negative. This is because h is positive: minus h is the slope of this curve at the origin. If h were zero then you would have perfect symmetry in the system and you would get a double well with the bump sitting exactly at the origin, a completely symmetric well; it is because h is not equal to zero that this is asymmetric and that there is a metastable state and a stable state. I'm still recalling the question that Peter Winkler asked in the first lecture: yes, we could also start the dynamics here and see how much time it would take to go over the hill to the other end. We can do that too, but it is not the classical situation of hysteresis and metastability that one usually looks at. And you must realize that there is a quadratic term, but this entropy plays a very important role: the precise form of this double-well potential is very much dictated by the entropy. If you give me a value of the magnetization, with every magnetization corresponds a certain number of spin configurations, and that number needs to be kept track of, because it matters for how the spin configuration is going to evolve. So we have this setting here, and I can now state the theorem as we know it, and after this theorem we're going to take a brief break.
And the theorem says the following. I'm going to take sets of configurations for which the magnetization is close to these minimizing magnetizations; these are not boldface, these are numbers in the unit interval. If I start from a set of configurations for which the empirical magnetization is really close to these two numbers, then in the Curie-Weiss model the time to cross over is given by a formula of the type we've seen before: it's a prefactor times e to the N times gamma, and this gamma is exactly the free energy barrier that the system has to go over, times beta. So you take your curve, and this free energy is completely explicit, you look at what the barrier is, you multiply it by beta, and that is what your gamma is. Your beta can be any number larger than one, because one is the critical value at which the phase transition disappears. So this is a very natural thing to study in the metastable regime, and there is a prefactor which turns out to be, well, not so easy. This was already found by Kramers and Eyring: it depends on these numbers here, the m-star and the m-minus. What plays a very important role is the curvature of this curve at these two points. So it's the curvature here and the curvature here that also determine what the prefactor is, and the flatter the curves, the longer it's going to take to go over. And there is a little but very important observation: h should not be larger than a certain threshold. This threshold depends on beta, and it's very simple. I made a picture here of what this curve looks like.
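That threshold curve can be made explicit. Setting both the first and the second derivative of the limiting free energy to zero gives the sketch below; this is my own derivation from the free energy formula discussed earlier, so the exact convention may differ from the lecture's slide, but the qualitative shape matches:

```python
import math

def critical_field(beta):
    """Largest h for which the limiting Curie-Weiss free energy
    f(m) = -m^2/2 - h m + I(m)/beta still has a double well (beta > 1).
    Obtained from f'(m) = f''(m) = 0."""
    m_s = math.sqrt(1 - 1 / beta)       # spinodal point, where f''(m_s) = 0
    return m_s - math.atanh(m_s) / beta
```

The curve vanishes as beta decreases to the critical value one, and increases (staying below one) as beta grows: for h below the curve there is a metastable well at negative magnetization, above it that well disappears and the crossing is no longer metastable.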
And it means that whenever you are in the regime where there is a phase transition, so beta is larger than beta critical, which happens to be one here, then you may not let your h exceed a certain curve, which has this shape, because otherwise the magnetic field is so strong that you would not have metastable behavior: the system would make the crossing very rapidly, and you would not be in the metastable regime. So it's a very nice and happy ending for the Curie-Weiss model: there is so much symmetry that we have a complete description of the crossover time with computable quantities, and we already see here again that this prefactor is a somewhat nontrivial object that you have to take into account. There is something to do with what is happening here and here in this curve, so there is something, again, playing a role that we need to keep track of. Okay, I propose that we have a very short break here, and then I will continue with really tackling the most important thing for this lecture, where we turn the graph into something random. Okay, are you willing to take questions now? Yep. Okay, so if there are questions before the break, that's a good time to pose them. If not, we can wait a bit; the other possibility is that we take a break and people can stare at those prefactors and come up with questions. I would say that there is a nice exercise that Frank and Elena prepared that deals with some of the capacity calculations, which helps you see where this prefactor comes from. So, Frank, again you've been so clear that people are understanding everything. Okay, well, this lecture is also conceptually a bit easier than the previous one, but something nice is coming up with the Erdős-Rényi graph, you'll see, of course. I have a question, if I may ask. Yeah, please. Regarding the parameters that you mentioned right now, in particular beta or h.
And the curvature portion of that curve, is there a way to predict or assume convexity in some intervals? I mean, in other words, thinking about energy and entropy exclusively, is it possible to assume that within certain constraints? Well, there is this condition here, that you cannot make your h too big, which is captured by saying you have to stay under this curve. This curve is not convex, it has a particular shape, and that shape is needed in order to get your double-well potential. If you do not have that condition, then this curve will not look like a double well, and so you would not have metastable behavior. Okay, so below the line, right, below the line? No, in the previous one, the one where you have the caricature picture, right there, below the line. And I'm going to mark this m for my magnetization, and soon you can see the two minimum points. Right. Yeah, they tend to have an appearance of convexity, right? Yeah, they do, absolutely. And that's a good point, because the convexity also means that these numbers here are both non-negative: this one has a positive curvature and that one has a negative curvature, and since there is a minus in front of it, I wrote it explicitly like that, to say this is a product of two things, and there really is curvature at these points, and this curvature plays a role in the metastable behavior. So indeed, there is convexity and there is concavity here, and in fact there is a crossover point, so there is much more you can say about this picture; it's a very nice function. Right. Okay. Thank you. Okay, maybe now we will take a real three to four minute break. I think it's always a good idea. Yeah. Okay.
Frank, I think maybe people are back and you can tell us what happens with random graphs. Yeah. Okay, so now we're going to really bite the bullet and say what happens when we move away from this complete graph, which had a high degree of symmetry and where we could do this lumping. We're going to look at the Erdős-Rényi random graph. So: you start with the complete graph, and for every edge you decide to keep it with probability p or to remove it with probability one minus p, where p is some fixed number between zero and one. What you then get is, basically, edge percolation on the complete graph. Now you have diluted your graph, because you only keep a fraction p of the edges. And what we're going to ask is: what happens if we run the exact same model on this graph? Because now some edges are missing, some of the spins are not interacting with some of the other spins; in fact we only keep a fraction p of the edges, so the interaction has been randomly diluted. So what can we do here? The very first thing you realize when you put yourself in this setting is that the idea of looking at the magnetization is gone, because if I were to follow the magnetization, I would have to conclude that it is no longer a Markov process; it does not capture the relevant information. If I just know how many spins are up and how many are down, I do not know the energy, because I do not know which spins are feeling an interaction and which are not. So lumping is gone: it is no use looking at the magnetization alone anymore, and things become much harder.
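A sketch of the dilution itself (illustrative code; in the actual model the graph is then fixed, "quenched", and the dynamics runs on it):

```python
import random

def erdos_renyi(n, p, seed=0):
    """Sample G(n, p): keep each edge of the complete graph on n vertices
    independently with probability p (edge percolation on K_n)."""
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < p]

def diluted_coupling(n, p):
    """The normalization the lecture adopts for the theorem: J = 1/(p n),
    so that a typical vertex, with about p n neighbours, still feels a
    total interaction of order one."""
    return 1.0 / (p * n)
```

A typical vertex then has degree concentrated around p times n, which is why 1/(pN) rather than 1/N becomes the natural interaction strength in what follows.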
Still, you would think: if N gets very large, then this graph should be sort of homogeneous, and the missing edges are probably distributed in a very homogeneous way, so maybe the magnetization, even though it is not Markovian, is perhaps still close to Markovian, question mark. We will have to play with that idea, but we have to come up with something really new. What I'm going to do is immediately give you the two main results that we know in this setting, and only later tell you a bit about what it takes to get these results; it will have to do again with questions about what the critical configurations are, how we can understand them, and what techniques we use, and I'll say a few more things about that. Now, I first want to mention that there were two parallel projects going on. Anton and Saeda and Elena were working on this problem from one perspective; in fact they were very much playing with the Dirichlet principle and the Thomson principle, again playing with test functions and thinking: if we just take the old test functions of the Curie-Weiss model, suitably diluted, does it work? And indeed it does work. At the same time, I was working with Oliver Jovanovski on trying to use capacities to control the dynamics of this empirical magnetization, even though it is not a Markov process, so we could not immediately use the standard tools. And along two different routes we came to results that are nice and intuitive and also complement each other nicely, so it was a very nice thing for us, as a group of five, to be talking and to hear what the other ones were doing. A nice time. Here is the theorem I derived with Oliver; it is rougher in some sense and finer in some other sense, and I will explain that to you.
So, we are going to take J equal not to one over N but to one over pN, and that's because a spin is now only interacting with roughly pN spins, so the right normalization is one over pN and not one over N. We're going to stick to beta being supercritical, the same as for the complete graph, and h should not be too big, because this represents the metastable regime. What we were able to do is to compare the metastable crossover time on the Erdős–Rényi random graph with the formula for the Curie-Weiss model with a slightly changed interaction parameter, one over pN rather than one over N. And the statement says that these two things are very similar, except for a certain factor, which is not so small because it grows like some power of N, but which is really small compared to the leading order, which, as we saw in the Curie-Weiss model, is e to the N times Gamma. So the leading order term is exponential, and then there is a polynomial correction, and we were able to prove that this exponent cannot be too big: with probability tending to one under the law of the Erdős–Rényi random graph, it is bounded by something given in terms of the parameters. So there is control over what the error is, but there is a polynomial correction term that we do not control. And you have to realize that because the graph is now drawn randomly, the answer should really be random. The Curie-Weiss model has a deterministic scaling, and there is now a correction, about which we can say something with probability tending to one under the law of the Erdős–Rényi random graph on N vertices. There is randomness now, and this is a statement about how small, or how big, this error is under the law of the Erdős–Rényi random graph.
So, on the one hand we were happy that we got a good result, and the result was also nice because it was completely uniform in how you start; but we pay a price: there is a rather biggish error — much smaller than the leading order term, but still an error. And we were asking ourselves what is really going on: on the complete graph the prefactor is constant, while here the prefactor is apparently random; we could not really compute it, it is probably a very complicated thing, but at least we have some control over how bad it can be. The theorem is true in the same range of parameters as for the Curie-Weiss model. And it is also uniform in the sense that it doesn't really matter with which configuration close to the metastable state you start: the answer is uniform in how you start, which was the result of certain coupling arguments that we did. So this left us with the question: this prefactor is complicated, the precise form of the critical droplet is probably complicated, and it is probably random, because the Erdős–Rényi random graph is random. So what is really going on here — is this the best one can do, or can one actually do something better? And in fact one can do something better, and this was the result of Anton, Saeda and Elena. They proved that if you look at the quotient of the metastable crossover time on the Erdős–Rényi random graph and that of the Curie-Weiss model, it is actually tight.
So the prefactor is not tending to infinity as n tends to infinity, but is stochastically tight under the law of the Erdős–Rényi random graph, and they were able to quantify this by saying: if I want this quotient to be sandwiched between e to the plus c and e to the minus c with appropriate constants, then the probability that it is within that window goes to one, sort of like a Gaussian. So it was a clear estimate of what this prefactor can do, and it comes with certain constants that are not so easy to compute but are nevertheless computable. And so this theorem is a considerable sharpening of what Oliver and I did, because it shows that this prefactor is tight: it is not going to move off to infinity as n tends to infinity, and apparently it is random, and with this there is some control on the tails of that random variable. It may very well be the case that this quotient simply converges to a random variable whose law we do not know, but at least we have some bounds via the tightness. So this is a big step forward in really understanding this result. This stronger result also comes with a little price: h has to be taken small enough, and you must let your dynamics start in a particular initial distribution, which has a deep reason in potential theory — it is called the last-exit biased distribution. Then the formulas become a little nicer to work with, and you can apply your computations a bit more sharply. So this is a very interesting situation where we are beginning to understand something more about this prefactor, and an interesting challenge would be to prove that under the law of the Erdős–Rényi random graph this quotient converges to a limiting random variable.
And can you say something about the distribution of this limiting random variable? That is music for the future — a really hard thing to do, but also interesting to see that this prefactor on the Erdős–Rényi random graph, which is still a very simple random graph model, appears to be again a very delicate object, of which we have some control by looking at the geometry. But now it is not just a matter of energy; it is energy and entropy, and understanding the full glory of this critical set is something that we at this moment are not yet quite able to do. So those are the two results, and I'd like to say a few words about what it takes to derive these theorems, because it has been a challenge for both groups. Anton, Saeda and Elena have really followed the classical route of potential theory. They started from the idea that a large Erdős–Rényi random graph is probably very homogeneous. So they said: why don't we use the Dirichlet and the Thomson principle, inserting the same test functions as for Curie-Weiss, corrected for the change in the interaction J going from one over N to one over pN, and then see — if we do that computation, do we get matching upper and lower bounds, does that work? Along the way you still have to fight the non-homogeneity of your Erdős–Rényi random graph, but your test functions and test flows are simpler. I'm making a long story much easier than it actually is, but that is the core of their argument; they have to work with concentration estimates and capacity estimates. There are a couple of questions, so this is a good moment. There is a question about whether you can actually get convergence as opposed to tightness — I guess you can get weak convergence along a subsequence. Yeah, yeah.
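The Dirichlet principle they exploit can be illustrated on a toy example (my own sketch, not the lecture's computation): for a one-dimensional chain with conductances c_i, the capacity between the endpoints is the reciprocal of the total resistance, any admissible test function gives an upper bound on it, and the harmonic test function achieves the minimum.

```python
# Dirichlet principle on a birth-death chain with states 0..N:
#   cap(0, N) = min over f with f(0)=1, f(N)=0 of
#               sum_i c_i * (f(i+1) - f(i))^2,
# and for a chain the exact answer is 1 / sum_i (1/c_i) (series resistances).

N = 10
# some positive conductances c_i between states i and i+1 (arbitrary choice)
c = [1.0 + 0.5 * (i % 3) for i in range(N)]

exact_cap = 1.0 / sum(1.0 / ci for ci in c)

def dirichlet(f):
    return sum(c[i] * (f[i + 1] - f[i]) ** 2 for i in range(N))

# naive linear test function: an admissible upper bound, not the optimizer
linear = [1.0 - i / N for i in range(N + 1)]
upper = dirichlet(linear)

# the harmonic test function (potential of the resistor network) is optimal
partial = [0.0]
for ci in c:
    partial.append(partial[-1] + 1.0 / ci)
harmonic = [1.0 - s / partial[-1] for s in partial]
optimal = dirichlet(harmonic)
```

Any guessed test function sandwiches the capacity from above; the art in the Erdős–Rényi setting is finding a guess whose Dirichlet form matches a lower bound from the Thomson principle.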
At the moment it does not go beyond tightness; we do not have convergence, but you would have convergence along subsequences, with no control over what these subsequences would be. Another question asks about exponential laws: do you expect them to exist? That's again a good question. As for Glauber and Kawasaki dynamics — I didn't mention it, but indeed there is always the exponential distribution in the background, and we can prove that this is a very universal law; it applies in most metastable situations, also here. You must do a little work, because the set that you start out with is not a singleton, and so some work is being done; Anton, Saeda and Elena are currently working on certain features of that as well. Okay, thank you. So, Oliver and I had a different route: we wanted to try something without playing with the Dirichlet and the Thomson principle. We said: there is this non-Markovian magnetization, but we are going to try to sandwich it between two Curie-Weiss dynamics in which I perturb the magnetic field a little bit. We were able to show that if you make the magnetic field a little bigger or a little smaller — and how much you perturb will go to zero as n goes to infinity — then, through coupling techniques, capacity estimates, concentration estimates and everything, the empirical magnetization on the Erdős–Rényi graph is sandwiched very narrowly between what the magnetization is doing in two perturbed versions of the Curie-Weiss model. That led to a sharp description of certain things, but with the perturbation we also lost something, and therefore our prefactor is much, much cruder. On the other hand, this easily allows you to prove the exponential law for the metastable crossover. And there were some lovely things going on: coarse-graining techniques, isoperimetric inequalities all over the place.
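As a quick numerical illustration of the exponential law mentioned here (my own toy, not one of the models of the lecture), one can sample metastable escape times of a biased birth-death walk and check that the coefficient of variation is close to 1, as it would be for an exponential random variable.

```python
import random
import statistics

def hitting_time(p_up, n_top, rng):
    """Steps for a biased walk (up w.p. p_up, down otherwise, reflecting
    at 0) started at 0 to first hit n_top -- a caricature of a metastable
    crossover out of a potential well."""
    x, t = 0, 0
    while x < n_top:
        t += 1
        if rng.random() < p_up:
            x += 1
        elif x > 0:
            x -= 1
    return t

rng = random.Random(7)
samples = [hitting_time(0.3, 8, rng) for _ in range(800)]
mean = statistics.fmean(samples)
cv = statistics.pstdev(samples) / mean  # exponential law => cv close to 1
```

The deeper the well, the closer tau / E[tau] gets to a unit exponential; the point of the lecture is that this holds very generally, while the mean itself is the delicate object.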
It is a technically rather demanding story, but what comes out are two lovely theorems that are nice to look at. I have no chance of really going into the full story, but the essence behind all of this is that because the Erdős–Rényi random graph is in the dense regime, there are still many, many edges, and it is very homogeneous, and this high level of homogeneity plays a key role in really being able to understand what is going on. You can say it is really very much like a Curie-Weiss model with some fluctuations that you need to control, and that is hard work. That's why you can compare the metastable crossover for these two models so well, and that lies at the heart of everything here: there is homogenization really playing an important role. So I want to wrap up with a few more slides in which I very quickly show you some other models of similar type, related to random graphs, that people have looked at, and then we will be done. The very first is where I do everything on the complete graph but I make my external field random. So I'm putting the randomness on the vertices rather than on the edges: the interaction is everywhere, but I'm assigning i.i.d. random variables to the vertices, and then I run my dynamics. This model, which is quite challenging, has been looked at in two series of papers. In the original papers they took the case where the field distribution is discrete. And then what was lovely is that they could still use a form of lumping, because they were just keeping track of the sites where this magnetic field takes each of, let's say, a finite number of discrete values. You are then working not with an empirical magnetization that is a scalar, but with an empirical magnetization that is a vector. So you're still doing some form of lumping, and that allows you to do various computations.
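Here is a small brute-force check of that observation (my own sketch; the assumed form of the random-field Curie-Weiss Hamiltonian is H(sigma) = -(J/2N)(sum_i sigma_i)^2 - sum_i h_i sigma_i): when the field takes finitely many values, the vector of per-level magnetizations determines the energy, which is exactly why lumping still works.

```python
import itertools
import random

def hamiltonian(spins, fields, coupling):
    """Random-field Curie-Weiss energy (assumed normalization)."""
    n = len(spins)
    m = sum(spins)
    return -coupling * m * m / (2 * n) - sum(h * s for h, s in zip(fields, spins))

def lumped(spins, fields, levels):
    """Vector magnetization: one component per field level."""
    return tuple(sum(s for s, h in zip(spins, fields) if h == lv)
                 for lv in levels)

rng = random.Random(1)
n, levels = 8, (0.2, 0.7)              # two discrete field values
fields = [rng.choice(levels) for _ in range(n)]

# group all 2^n spin configurations by their lumped vector magnetization
by_vector = {}
for spins in itertools.product((-1, 1), repeat=n):
    key = lumped(spins, fields, levels)
    by_vector.setdefault(key, set()).add(
        round(hamiltonian(spins, fields, 1.0), 9))

# every configuration with the same vector has the same energy, so the
# vector (m_a, m_b) is a sufficient statistic: lumping survives
all_single = all(len(energies) == 1 for energies in by_vector.values())
```

With a continuous field distribution every site carries its own value and no finite vector suffices, which is why that case required the much heavier machinery mentioned next.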
And it turns out that you find something like in the Curie-Weiss model, but with a prefactor that is a rather complicated function of this probability distribution. Then there were two papers by Bianchi, Bovier and Ioffe, where they said: if these random magnetic fields have a continuous distribution, the story becomes more complicated and you need to do a lot of work to get there. This is again a success story in that you can deal with it, because you can still use the lumping techniques. And that is very different from the model we were talking about before, where there is no lumping: the fact that you have random edges and a random interaction introduces an intricate spatial structure that you have to fight with. For this model a lot of progress has been made — it was a big fight, especially the continuous distribution case — but fortunately you can still use lumping techniques in some form or other. So what else has been done? Well, we have looked at metastability in the sparse regime: graphs where the degrees are not proportional to the number of vertices but are stochastically bounded. There is a very famous model called the configuration model, which generates a random graph with a prescribed degree sequence; many of you will have seen this. You start with the vertices with prescribed degrees, you put little half-edges out of them, exactly as many as the degree you picked, and then you randomly pair these half-edges and you generate a graph. We have looked at what happens when you run a Glauber spin-flip dynamics on this graph. So here we were looking not at n going to infinity but at beta going to infinity.
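The half-edge pairing just described can be sketched in a few lines (my own minimal version; real implementations, e.g. in networkx, treat self-loops and multi-edges more carefully):

```python
import random
from collections import Counter

def configuration_model(degrees, rng):
    """Pair up half-edges ("stubs") uniformly at random.  The result may
    contain self-loops and multi-edges, which are often discarded or kept
    depending on the application."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    assert len(stubs) % 2 == 0, "total degree must be even"
    rng.shuffle(stubs)
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

rng = random.Random(42)
degrees = [3, 3, 2, 2, 2, 2]            # prescribed degree sequence
edges = configuration_model(degrees, rng)

# each vertex appears in exactly its prescribed number of edge endpoints
count = Counter(v for e in edges for v in e)
```

By construction every vertex realizes its prescribed degree, which is the whole point of the model.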
There was a first paper by Sander Dommers, who started to look at this problem and did some rough estimates, and then over time we were able to say something about the sparse regime. It is very challenging; in the sparse regime there are many things that we do not know. We will again see something of the form of a prefactor times an exponential term, but it turns out to be quite difficult to get your hands on these terms, and these terms are again very random, because they depend on the realization of the graph. In the sparse regime nothing is homogeneous; homogenization is not going to work for you, so it seems. You can even do Kawasaki dynamics on random graphs of this type, and there are a couple of papers where we have looked at this on random bipartite graphs: you have two classes of vertices, the blue ones and the red ones, and the blue ones only connect up with the red ones, while red does not connect to red and blue does not connect to blue — a kind of two-community graph. In that case it is possible to also understand metastability phenomena, and this involves Siamak Taati, Francesca Nardi, Sem Borst and Matteo Sfragara. This is also an interesting direction where we are beginning to eat our way into the topic, but it is in general quite challenging to really compute the fine details of what is going on. So here is my final slide with a take-home message. It should be clear from what I have been explaining that prefactors of average metastable crossover times are delicate objects for random graphs. For one, they are random themselves; for another, they depend in an intricate manner on the underlying geometry of the graph. If the graph is very homogeneous, we still have good control; if the graph is not very homogeneous, like a sparse graph, we can do much less.
So at that level we are still struggling very much with what is going on. If we take the crossover time divided by its mean, that will have an exponential distribution — that is a very robust law that we are able to prove in many, many cases. The delicate part sits in really controlling this prefactor and understanding what is really the geometry that matters, and whether we can understand that. I think this area is still very much at the beginning; it has been around only for a few years, and there is really a lot that remains to be done. So with that I close, and I thank you for your attention. Thanks, Frank, for another great lecture. I think Sarah is going to let us unmute ourselves, and I hope you join me in thanking Frank.
Metastability is a widespread phenomenon in the dynamics of non-linear systems subject to noise. In the narrower perspective of statistical physics, metastable behaviour can be seen as the dynamical manifestation of a first-order phase transition. A fruitful approach to metastability is via potential theory. The key point is the realisation that most questions of interest can be reduced to the computation of capacities, and that these capacities in turn can be estimated by exploiting variational principles. In this way, the metastable dynamics of the system can essentially be understood via an analysis of its statics. This constitutes a major simplification, and acts as a guiding principle. The setting of potential theory relevant for interacting particle systems is that of reversible Markov processes. Within this limitation, there is a wide range of models that are adequate to describe a variety of different systems. Our aim is to unveil the common universal features of these systems with respect to their metastable behaviour. The first lecture will be an introduction to metastability. In the other three lectures, we will focus on three examples in detail: - Kawasaki dynamics on lattices. - Glauber dynamics on random graphs. - Widom-Rowlinson dynamics on the continuum. Reference: Anton Bovier and Frank den Hollander, Metastability -- a Potential-Theoretic Approach, Grundlehren der mathematischen Wissenschaften 351, Springer, Berlin, 2015.
10.5446/55662 (DOI)
It has a beautiful structure which I will try to explain to you. Okay. As with every model, I have to set the stage: what is the configuration space, what are the key notations, what is the Hamiltonian, what is the reference measure, what is the metastable regime. And we're going to begin by taking the plane and carving out of the plane a torus. The torus is finite; we assume that it is large enough, and what large enough means depends on what I will be talking about later. So a large torus, to which we give periodic boundary conditions in order to make it nicer. The set of configurations that we're going to consider is the set of all finite collections of points that you place in this torus. So a point configuration gamma is a finite configuration of points in the torus, and N(gamma) counts how many points there are — the cardinality of this point set. I'm using the symbol gamma rather than omega because I want to mark that this is really a very different configuration space than what we have been talking about before. We're going to imagine drawing disks of radius one around these points. So I get unit disks in the plane whose centers are captured by this finite configuration, and that is our configuration space. The number of disks can be any finite integer starting with zero, so the box can also be empty. That is the set of possible configurations; that is the first thing. And then we're going to build our Gibbs measure. There is going to be a reference process, and the reference process will simply be the Poisson point process with intensity one: you throw points into your torus with intensity one and you get a random point set. That is our reference process — that is what the disks would be doing if there were no interaction at all.
Then we're going to introduce an interaction Hamiltonian. What energy do we assign to a configuration gamma? In the Widom-Rowlinson model you take the volume of the union of all the unit disks centered at the points of the configuration, and you subtract from that the sum of the volumes of the individual unit disks. Now B is a unit disk, so each of those individual volumes is pi. So the Hamiltonian — the volume of the union minus the sum of the volumes — is equal to minus the total overlap between the disks. This Hamiltonian makes the model attractive: the more the disks overlap, the lower the energy becomes, so the Hamiltonian favors disks lying on top of each other. A perfect lying on top of each other of all the disks is very unlikely, because the reference measure is a Poisson point process, and it happens with probability zero that the centers all lie exactly on top of each other. So the disks are always a little bit spread out: the closer they come to each other, the less likely that is for the Poisson point process, but the more likely it is because the Hamiltonian likes overlap. Then there is the usual Gibbs factor, with the inverse temperature in front. There is one more ingredient: we are not fixing the number of disks. We allow the number of disks to be random, any non-negative integer, and we weight the presence of a disk with another parameter z, which is called the chemical activity. When you want to bring in a disk, you have to pay a price z. That's why this is called the grand canonical Gibbs measure: it doesn't fix the number of disks. There is an interaction between the disks, there is a reference measure, and there is a chemical activity.
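A Monte Carlo sketch (my own, with hypothetical parameters) of this Hamiltonian on a flat torus: H(gamma) = vol(union of unit disks) - N(gamma) * pi, which is about 0 for two far-apart disks and about -pi for two coincident ones — minus the total overlap.

```python
import math
import random

def torus_dist2(p, q, side):
    """Squared distance on the flat torus [0, side)^2."""
    dx = abs(p[0] - q[0]); dx = min(dx, side - dx)
    dy = abs(p[1] - q[1]); dy = min(dy, side - dy)
    return dx * dx + dy * dy

def halo_volume(points, side, rng, samples=200_000):
    """Monte Carlo estimate of the area covered by the unit disks."""
    hits = 0
    for _ in range(samples):
        x = (rng.uniform(0, side), rng.uniform(0, side))
        if any(torus_dist2(x, c, side) <= 1.0 for c in points):
            hits += 1
    return side * side * hits / samples

def hamiltonian(points, side, rng):
    """H(gamma) = vol(halo) - N(gamma) * pi  (= minus the total overlap)."""
    return halo_volume(points, side, rng) - len(points) * math.pi

rng = random.Random(0)
side = 10.0
h_far  = hamiltonian([(2.0, 2.0), (7.0, 7.0)], side, rng)  # disjoint disks
h_same = hamiltonian([(2.0, 2.0), (2.0, 2.0)], side, rng)  # full overlap
```

The estimates carry Monte Carlo noise of a few hundredths here, but the ordering is clear: overlapping disks have strictly lower energy.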
The z parameter is a way of controlling the typical density of the disks in this Gibbs measure. And then, as always, there is a normalizing partition function that turns this into a Gibbs process. So this is the Widom-Rowlinson model as it was introduced by these two gentlemen, and it describes an equilibrium distribution of disks. It turns out that this system has a phase transition depending on the values of z and beta, which I will come back to in a second. So this defines our model and puts us in place. What we're now going to do is add a dynamics to the classical Widom-Rowlinson model, and we'll do it in such a way that this grand canonical Gibbs measure is the reversible equilibrium of our dynamics. We're going to do something that is very close to the heat-bath and Metropolis dynamics that we did in the previous two lectures, but slightly different; I will come to that in a second. I'd like to point out that if the torus had been the infinite plane, then we know that there is a phase transition along a certain curve, z equal to beta times e to the minus pi beta — the pi is just the surface area of a single disk. It is known that, if you draw this curve, there is a phase transition curve in this plane, starting from a certain value beta critical. It means that if you put your value of z above that curve, you crank up the intensity of the disks so much that they will tend to form a liquid, and if you are below this line, they will be much less dense and form what you might call a vapor. So there is a line in this plane given by this formula, starting at beta critical, where this phase transition occurs. The proof of this was given only a few years after the Widom-Rowlinson model was introduced, by Ruelle and by Lebowitz and Gallavotti.
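In formulas, the objects just described read as follows (my reconstruction from the spoken description, so take the notation as indicative):

```latex
% Grand canonical Widom-Rowlinson measure on the torus T, with Q the
% Poisson point process of intensity 1 as reference measure:
\mu_{\beta,z}(\mathrm{d}\gamma)
  = \frac{1}{\Xi_{\beta,z}}\; z^{N(\gamma)}\, e^{-\beta H(\gamma)}\, Q(\mathrm{d}\gamma),
\qquad
H(\gamma) = \mathrm{vol}\Big(\bigcup_{x\in\gamma} B_1(x)\Big) - N(\gamma)\,\pi,
% and the phase-transition curve mentioned in the lecture:
\qquad
z_c(\beta) = \beta\, e^{-\pi\beta}, \quad \beta > \beta_c .
```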
Then some 25 years later, Chayes and Kotecký revisited this and proved many more things, also making a connection with the FK model. So in this way the model really entered squarely into the world of statistical physics, and many interesting questions have been dealt with since then. Now, we are always going to sit close to a phase transition when we want to do metastability. The way we're going to do this is as follows: we're going to place ourselves slightly above this curve, but start the system off in a situation where it is vapor-like. Then we're going to ask: starting from a vapor state, in a setting where the parameters really want the system to be in a liquid state, how long does it take before these disks sort of rain into the torus and turn the vapor into a liquid? I have to be much more precise about what that exactly means, but that's what we're going to do. In fact, we're going to sit just above this line and move down to the right. So it's going to be some kind of low-temperature and, as you will see, high-density limit, because the shape of the curve will force you into that regime. But we will get there as we go along. So this is still the equilibrium setting, and there is no closed-form expression known for this critical value — not even a good estimate. Already here you see that finding this critical value is not a trivial thing in the continuum. Now I'd like to mention that the Widom-Rowlinson model also has a different interpretation. I said the Hamiltonian is minus the total overlap of the disks, and you could say: why is that — isn't that a bit peculiar? It's a perfectly nice interaction Hamiltonian, but it has an interesting other interpretation.
Namely, you could start out with the same torus, throw in blue and red disks of radius one half, and impose a hardcore repulsion between the two species and no interaction within a species. So if blues overlap, no effect on the Hamiltonian; if reds overlap, no effect on the Hamiltonian; but blue and red can never overlap. A hardcore gas with no attractive interaction, only hardcore repulsion. If you then draw a ball of radius one around the red balls of radius one half, the hardcore repulsion says the centers of the blue points cannot enter these balls — we call this the halo of the red disks. And it so happens that if you look at the right picture, put on a pair of glasses that only sees the blown-up red disks and not the blue disks, and integrate out over the blue disks because you don't see them, then you exactly get the Widom-Rowlinson Hamiltonian: the more the red ones overlap, the more space there is for the blue ones to move around in, and so those configurations are favored because of entropy. There is an exercise we wrote down where you can see that indeed this hardcore two-species model is an equivalent formulation of the Widom-Rowlinson model. So there are two ways to look at the model. I wanted to mention this; we're not going to use this picture anymore. I will work directly with the Widom-Rowlinson model as it was originally introduced. Okay, so far I've only been talking about the equilibrium Widom-Rowlinson model. Now what we want to do is add a dynamics: a continuous-time Markov process on this state space of centers with unit disks around them.
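The integration over the blue disks can be written out in one line (again my reconstruction; z_red and z_blue are the activities of the two species, |T| the volume of the torus, and V(gamma) the volume of the halo of the red configuration):

```latex
% Summing over blue Poisson configurations, which must avoid the halo of
% the red configuration gamma, produces a free-volume factor:
\sum_{\text{blue}} \;\propto\; e^{\,z_{\mathrm{blue}}\,(|T| - V(\gamma))},
% so the effective weight of the red configuration is
w(\gamma) \;\propto\; z_{\mathrm{red}}^{\,N(\gamma)}\;
            e^{-z_{\mathrm{blue}}\, V(\gamma)} ,
% which matches the Widom-Rowlinson weight with
%   beta = z_blue,   z = z_red * e^{-pi * beta}.
% Equal activities z_red = z_blue = beta then give exactly the critical
% curve z_c(beta) = beta * e^{-pi * beta}.
```

This is the computation behind the remark at the end of the lecture that the critical curve is just the line of equal chemical activities seen through the one-species glasses.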
We're going to consider a very simple Markov process that allows disks to be born and to die, and nothing else. A disk either enters or leaves, but while it is in the box it does not move around. It would be great to also look at a model where the disks move around, but we are not capable of dealing with that extra complication. So it's very simple: you look at the disks, you see disks disappearing, you randomly see disks being born, and we observe what that picture is doing. We're going to use something close to a Metropolis-type dynamics: when a particle wants to enter at position x in a current configuration gamma, we look at the energy if that particle were added minus the energy before it was added. I'm not going to take the positive part now, but simply the difference — and that's why this is called not Metropolis dynamics but heat-bath dynamics. I accept the birth of a particle at rate e to the minus beta times this difference, very similar to before, but with a z in front of the rate, because every particle comes with a chemical activity z. And particles die at rate one, independently of where they are and of whether there are other particles or not. So the death of particles is a very simple thing; it's the birth that is more difficult: when you want to insert a particle, you have to see what effect that has on the energy. The newly covered volume is always non-negative, so it is hard to add a particle: it has to conquer new territory, and that's why it's difficult to bring in particles. For instance, throwing in the very first particle is already a hard thing to do, because you immediately create a volume that is the volume of a whole disk.
So the first particle only enters at a rate of order e to the minus beta pi, and beta will be big, so that's not easy to do. Okay, that is our dynamics, and here is the generator: there is a birth part and there is a death part, where particles are removed, in the usual notation for generators. What is very important to realize is that this dynamics, as chosen here, does have the Widom-Rowlinson grand canonical Gibbs measure as its reversible equilibrium. So if this is the way in which particles die or are born, you will indeed have the Widom-Rowlinson model as equilibrium. Okay, now remember that we want to talk about something that involves condensation, and the way we're going to model this is as follows. I'm going to consider two very special configurations: an empty box, which symbolizes the configuration where there is no particle inside the torus — the torus is entirely empty — and a black box, which denotes any configuration such that the disks completely cover the torus. We want to think of these as a vapor and a liquid, and that's a fair thing to do, because one has very low density and the other has high density. Now we're going to start the dynamics in the empty set — in mathematical terms, we prepare the system in a vapor state. We then take a parameter kappa between one and infinity and set z equal to kappa times the critical value. So, in the picture from before, we're sitting somewhere above the critical curve: even though we start in a very vapor-like state, the system in equilibrium would like to be in a liquid state. And then I ask: how long does it take for the system to actually fill the box? When it has done that, you say: I went from a gas state to a liquid state, I have condensed — and we are asking how long that takes.
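A small sketch of the birth rate (my own parametrization, following the substitution z = kappa * beta * exp(-pi * beta) made below): adding a disk at distance d from one existing disk costs the newly covered area dV(d), the effective birth rate is kappa * beta * exp(-beta * dV(d)) — maximal when the new disk lands on top of an existing one, and kappa * beta * exp(-beta * pi) for an isolated birth.

```python
import math

def new_volume(d):
    """Newly covered area when a unit disk is born at distance d from a
    single existing unit disk: pi minus the lens-shaped overlap (d < 2)."""
    if d >= 2.0:
        return math.pi
    lens = 2.0 * math.acos(d / 2.0) - (d / 2.0) * math.sqrt(4.0 - d * d)
    return math.pi - lens

def birth_rate(d, beta, kappa):
    """Effective heat-bath birth rate kappa*beta*exp(-beta*dV) obtained
    after substituting z = kappa * beta * exp(-pi * beta)."""
    return kappa * beta * math.exp(-beta * new_volume(d))

beta, kappa = 4.0, 1.5
r_on_top = birth_rate(0.0, beta, kappa)  # lands exactly on an existing disk
r_far    = birth_rate(3.0, beta, kappa)  # lands far away: full cost pi
```

This makes the "conquering new territory" picture quantitative: the rate penalizes exactly the area the newcomer has to cover on its own.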
We fix kappa, any number between one and infinity, so we sit a factor kappa above the critical curve, and then we let beta go to infinity. That is the metastable regime we are dealing with. So we make a very specific choice of parameters, and as you will see, this is a low-temperature, high-intensity limit; I'll show that in a second. Our question is: in that limit, how long does it take to go from an empty to a full box? That's our mathematical caricature of condensation. Now, for this particular choice, where z is kappa times z_c — and recall that z_c in the Widom-Rowlinson model is beta times e to the minus pi beta, pi being the surface area of a single disk — if I substitute this into the grand canonical Gibbs measure, I see that the Gibbs measure keeps its reference measure, the Poisson process; the z to the power of the number of particles becomes kappa beta to the power of the number of particles in the torus; and what is left from the Hamiltonian, combined with that power of e to the minus pi beta, is really the volume of the halo of the configuration. Let me go back to the picture: you have a bunch of points, you have unit disks around them, they overlap, and the halo of the configuration, which we call H(gamma), is just the set covered by the disks; the volume of the halo, V(gamma), is the volume of the covered set. So in the end, in the particular metastable regime we're looking at, we find that our grand canonical measure is given by this, where this is the volume of the halo of the disk configuration.
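Carrying out the substitution z = kappa * z_c(beta) explicitly (my reconstruction of the formula on the slide, with V(gamma) the volume of the halo and N(gamma) the number of disks):

```latex
z^{N(\gamma)}\, e^{-\beta H(\gamma)}
  = (\kappa\beta)^{N(\gamma)}\, e^{-\pi\beta N(\gamma)}\,
    e^{-\beta\,(V(\gamma) - N(\gamma)\pi)}
  = (\kappa\beta)^{N(\gamma)}\, e^{-\beta\, V(\gamma)} ,
% so in the metastable regime the grand canonical measure reads
\mu(\mathrm{d}\gamma) \;\propto\;
  (\kappa\beta)^{N(\gamma)}\, e^{-\beta\, V(\gamma)}\, Q(\mathrm{d}\gamma).
```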
This counts the number of disks, and since beta will become very large, we're really talking about a very high density regime where many disks in the torus will be favored, and that has to do with the fact that really the system wants to condense. So we're favoring condensation, but we're still starting from something that has a very low density. And I'm about to go to the break. The Dirichlet form, which plays a very important role, as you've seen before, because of the Dirichlet principle and the Thomson principle, can be written out as follows. There's a test function, I don't call it a phi but an f again, to mark that we're living in a different world. There's an integral with respect to the Poisson reference measure, there's an integral over where a next particle can land, and then there are these factors here coming from this term here in the Gibbs measure. It so happens that you have to evaluate this at gamma plus the extra particle x that you added, and then there's the square of the difference of your test function. So here's a very explicit Dirichlet form: an integral over all point configurations plus an extra particle, involving the volume of the halo, the number of particles, and some parameters, and this is the quantity that you have to work with. And this is a high dimensional integral that we will have to understand in order to make our way to a description of the metastable behavior, because as you know, minimizing the Dirichlet form will give you the capacity, and the capacity is the key to understanding the metastable crossover time. So that is the setting that we're dealing with. And here again, I'm very, very quickly throwing old formulas at you, where you say the capacity is just obtained by minimizing your Dirichlet form subject to the right boundary conditions on where you want to start and where you want to end up. And the mean metastable crossover time is given by a formula that essentially only involves the capacity.
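The two "old formulas" being alluded to can be recorded compactly. A sketch of the standard potential-theoretic identities, with the empty box and the full box as the two special configurations and E the Dirichlet form:

```latex
% Dirichlet principle: capacity as a constrained minimum of the Dirichlet form
\mathrm{cap}(\square,\blacksquare)
  = \inf\bigl\{\, \mathcal{E}(f,f) \;:\; f(\square)=1,\ f(\blacksquare)=0 \,\bigr\},
% and the mean metastable crossover time, to leading order,
\mathbb{E}_{\square}\bigl[\tau_{\blacksquare}\bigr]
  \approx \frac{\mu(\square)}{\mathrm{cap}(\square,\blacksquare)}.
```

Minimizing over test functions f with the right boundary conditions is exactly the variational problem the rest of the lecture is organized around.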
And here is something very trivial: this is some constant, because if there's no disk around, there's no cardinality, there's no halo, and this is just something very trivial. So it's all about the capacity, and it's all about understanding this variational formula, and that's what we are going to do. So I'm going to stop here. And after the break, I will tell you what the heuristics is that we expect to happen, and then what the three main theorems are, and I'll tell you a little bit about what their background is, and have a little bit of an outlook on the future. So let's have a break here, Ed. Okay, thanks, Frank. So there was a mini discussion going on about whether or not beta_c is the critical value for percolation, that is, infinitely many disks, as opposed to a complete phase transition. No, no, it is not. I mean, it's much more complicated, because there's really interaction between the disks going on. It's not just the threshold for percolation, because the disks interact. Right. Okay, are there questions for Frank before we go to a real break, which we will? So in the phase diagram that you had, is it clear that... So you have this picture of the critical z being unimodal. Is it easy to understand? Well this is... The only liquid and vapor on one side of the... Yeah, well actually it can be understood more easily if you go back to this double picture, where you say there would be a chemical activity for the blue and the red, and then the phase transition occurs when those two chemical activities are the same. And then once you go from this picture, where the phase transition is just a line where the two chemical activities are the same, because there's no beta here, because the only interaction is hardcore repulsion, if you then go to this picture here where you integrate out, you see that equality between the chemical potentials boils down to exactly this formula here.
So the computation behind that is not difficult at all. It's very straightforward. The only thing that is difficult is that the phase transition really only happens above a certain critical value, and we have no closed form for that value. There are some bounds, rough bounds, Ruelle has some bounds on that, but there's really no explicit form yet. So Kenlin asks, he says it's a stupid question, but he wants to know how we interpret the birth and death of a particle from physics. That's again, no, that's a very good question. And it's again related to what I was talking about when I was talking about Kawasaki dynamics. You may think of your torus as really living in a big reservoir, and this reservoir has other particles around, and these particles may move into your box, which is in this case a torus, or move out of it. So the birth and death is something that reflects the presence of a reservoir. Of course, a true reservoir is different. I mean, this torus has periodic boundary conditions, we did that for mathematical reasons, and then you would say, well, a particle would sort of migrate into it. So particles would start to move, whereas in our model, we're not making the particles move. We do that because we don't know how to deal, for now, with the model with motion. So birth and death is a way of mimicking the presence of a reservoir. And if you want to condense, these disks have to come from somewhere to really do the condensation. So this is a way of modeling the reservoir, but the reservoir is sort of gone. It's replaced by parameters, and we've even made it a periodic boundary condition just to make life nicer for mathematical reasons. So an honest-to-goodness mathematical way of modeling the effect of a reservoir. Fardad is asking whether it's obvious that the partition function is well-defined.
That's not a problem, because we're in a finite set; this Hamiltonian, if you add more and more particles, is going to really make the weight small. So indeed, if you sum over all possible values of gamma and you start to integrate, this is indeed a finite thing. In fact, this partition sum can be computed in closed form, and it doesn't suffer from any problems of non-existence. But that's because we're dealing with a finite volume, which people were forgetting in the chat. Maybe we'll take a real break to give Frank a chance to stretch, and the rest of us as well, and come back in three to four minutes. Thanks. Thank you. Question from Omar about the two color model: whether or not you include overlaps there, or whether it's just the one parameter. In the two color model, the only... so that's a model where there's no attraction between the disks, there's only hard core repulsion between the red and the blue, so that's a really hard core gas with no attraction. And if you then close, let's say, your right eye and you don't look at one of the species, what you see the other species doing is exactly the Widom-Rowlinson model, where there's an effective interaction coming from the fact that you integrate out over the other species. And it exactly produces the Widom-Rowlinson model. Okay, there's an ongoing discussion, but actually there's a rather nice exercise there on Poisson thinning where you can derive one model from the other, and it proceeds rather cleanly and quite nicely. Okay, Frank, why don't you start the second part of your course. Yes. Final part of the course. Yeah, thank you. So I want to talk about three theorems, and working our way up to these theorems, we should first step back one second and say: what should we be expecting to happen? So we have a system of particles, they have a tendency to stick together, there's this attractive interaction, because the more they overlap, the lower the energy is.
So I should expect that these particles are going to form some kind of droplet. Well, okay, let's see what that means. Now inside this droplet, particles are distributed according to a Poisson process, because whenever I throw a particle inside a halo, I'm not changing the halo. And so a particle falling on top of already existing particles is not going to pay any energetic cost. And I know that the chemical activity is kappa beta, so that's a large number, because kappa is a fixed number between one and infinity and beta is going to be large. So I'm going to see a droplet that is very dense inside; many particles are sort of overlapping. Now near the perimeter of the droplet, particles stick out, and they are born at a rate that depends on how much they would stick out if they are being born. Because if you stick out, you're going to increase the halo, and that's going to be difficult to do. So particles will try to sort of conquer what is around the critical droplet, but they will have a hard time doing that; at the same time, particles are raining in at the high rate kappa beta. So they are sort of bombarding this droplet and trying to make it grow. And it turns out that if the droplet has a radius r, roughly (well, there is no such thing as a perfectly round droplet made of overlapping disks, but let's, for the sake of argument, say there is), then it will turn out that if r is small, this droplet has a tendency to shrink, and if r is large, it has a tendency to grow. And how much it likes to grow or not depends on the curvature of the droplet. So the flatter a droplet is, the easier it is to grow, because if you stick out and you're very flat, you're not sticking out so much. On the other hand, if your droplet is very curved, when you try to stick out a unit disk, it's going to have more surface sticking out. And there's a balance between at what curvature this droplet of roughly radius r has a tendency to grow or to shrink.
And we will see that there is a very neat threshold value in this critical regime where that crossover is taking place. And that's kind of the rough thing you're saying: well, we're looking for some kind of high density, more or less circular droplet. And let's see whether that is true or not. In fact, I'm now going to come to the three main theorems that we proved for this model. And they are theorems that are very much of a metastability flavor. And in order to do that, I need to talk about a parameter r. And this is supposed to mean something like a perfect disk of radius r, even though in our model it's not possible to put unit disks on top of each other so that you get a perfectly round disk. But nevertheless, this will be a good approximate argument. And there will be a function of r that is given by the volume minus kappa times the volume of the same ball with one radius less. And the one here comes from the fact that we are talking about unit disks. And if you plot that, it has a parabolic shape like this. It achieves a maximum at the radius kappa over kappa minus one. And remember that kappa is a number between one and infinity, so this is a number larger than one and less than infinity. And basically this is the energy, well, I would say the free energy, of a configuration where the unit disks would form something close to a ball of radius r, a disk of radius r. And you see there's a certain threshold where there's the highest value. And if we start with an empty box, we are sort of saying there's no particle. When I throw in my first particle, which sooner or later has to happen, I have a radius one. And so this is sort of the metastable state. And again, if your droplet is very, very big and starts to basically cover your entire torus, that means that you're going to go down in energy somewhere around here.
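The parabola and its maximum can be computed explicitly. A sketch for unit disks in two dimensions, writing |B_r| = pi r^2 for the area of a disk of radius r:

```latex
\Phi(r) = |B_r| - \kappa\,|B_{r-1}| = \pi r^2 - \kappa\pi (r-1)^2,
\qquad
\Phi'(r) = 2\pi r - 2\kappa\pi (r-1) = 0
\;\Longrightarrow\;
r_c(\kappa) = \frac{\kappa}{\kappa-1},
\qquad
\Phi\bigl(r_c(\kappa)\bigr) = \frac{\pi\kappa}{\kappa-1}.
```

Since kappa lies in (1, infinity), the critical radius lies in (1, infinity) and shrinks as kappa grows: the more supersaturated the system, the smaller the critical droplet.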
And this picture is very reminiscent of the barriers that we saw appearing in the Kawasaki dynamics, and the barriers that were appearing in the Glauber dynamics on random graphs. And again here, this is sort of the energy of a droplet of size r. Now with these simple notations, we can state, first in a picture, what the critical droplet will actually be. What will happen is that you draw a disk of radius r_c of kappa. So that's this explicit radius. And you're trying to throw inside this disk something like kappa beta particles, so order beta particles. And what will happen is that the boundary of this collection of unit disks is not going to be a perfect disk. It's going to be a little bit bumpy near the boundary, and it is that bumpiness that describes the full set of critical droplets that actually form the barrier for the nucleation. And what turns out to require a much deeper analysis is that, of the order beta particles that are inside the box, approximately beta to the one-third particles are sticking out of the boundary, and all the rest are not seeing the boundary at all. They form a kind of reservoir that is there, and it is the particles on the boundary, which are far fewer and turn out to be of order beta to the one-third, where the action is: they try to grow the droplet, pushing outside, but they're also dying, and there are other particles coming in trying to push outside as well. So what is happening is that as long as you have created something that is smaller than this critical disk, the tendency is for it all to disappear again and go back to the vapor state. You try, many times unsuccessfully, to create something like this, and once you've managed to create it, then the disk is going to grow and go over the hill. And again, you see here that the real critical droplet is not simple at all. It's not a perfect disk.
It's something that is very bumpy, and this bumpiness represents an interesting geometry that you have to really deal with in order to get your hands on the metastable crossover time. And in fact, in an early physics paper by Stillinger and Weeks, they were trying to describe something like this, and they were ignoring all sorts of terms and said: well, what should be happening near the boundary is something that you might think of as some kind of random wave, and they were calling these capillary waves. And in essence, this is an attempt to understand surface tension. So what we are trying to do here is really building a rigorous mathematical theory for surface tension: what is happening at the interface between an empty region and a full region. So here is the first theorem. It is a refinement of what could be called an Arrhenius formula, and it turns out that in order to go from empty to full, you must create this critical droplet. And the time for you to do that will have two terms. There is a term of order beta; beta goes to infinity. And there's a certain volume term here, and this is computable. It turns out to be a very simple thing: it's simply the value here at the top of this parabola. And then there is a correction, which turns out to be beta to the one-third times something, and this term is a surface free energy. So this term arises because the critical droplet is not a perfect disk. It is a bumpy disk. And there are about beta to the one-third particles near the boundary, and this bumpiness gives rise to this correction term. So you could think of this as, just like we were talking before, a pre-factor in front of an exponential term. Well, now the pre-factor is a bit more serious. It's e to the beta to the one-third, so it's big. And we have managed to show that this term is also computable.
It is, as a function of kappa, here it is, a very explicit form, times a certain constant, and this constant here is a whole world in itself. In order for us to compute this constant, we have to do lots of work. There are mesoscopic and microscopic problems, and it's going into the nitty-gritty of what is happening near the boundary. If a particle sticks out, well, by how much is it sticking out of all the other particles? So what is the effect on the halo? And that depends on all the other particles around it, and so we have to come to grips with what that is. We even have to come to grips with the fact that there are some conditions on the center of this particle for it to be able to stick out of the other particles at all; otherwise it's not a boundary particle. So there's a whole beautiful and deep and exciting story behind this constant, which I'm sort of sweeping under the rug here, and the main message is: there is a volume contribution and there is a surface contribution, and they come with different powers of beta. The power beta to the one-third is interesting here. And there are explicit forms for these functions as a function of kappa. Frank, there was a discussion that started with the question from Siamak about the power one-third. Is there a way to understand that at some intuitive level? And it has to do, at the end I will come to discuss what this would do in higher dimensions. If we go to higher dimensions, it will change. And the beta to the one-third is really a two-dimensional exponent here. And it has a lot to do with, you know, when I stick out, how much does that cost? How much do I then eat away from the boundary? But the total boundary is fixed. So it comes out of a certain balance, which is not very simple to explain, but I will show later that this one-third really would change if you go to other dimensions. So it is specific to the two-dimensional setting.
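Putting the volume and surface contributions together, the first theorem has the following shape. A sketch of the asymptotics, with S(kappa) denoting the surface free energy whose constant carries all the depth:

```latex
\mathbb{E}_{\square}\bigl[\tau_{\blacksquare}\bigr]
 = \exp\!\Bigl(\beta\,\frac{\pi\kappa}{\kappa-1}
     \;+\; \beta^{1/3}\, S(\kappa)\,\bigl(1+o(1)\bigr)\Bigr),
\qquad \beta\to\infty.
```

The leading term is the height of the parabola at the critical radius, and the beta to the one-third correction is the surface term specific to unit disks in dimension two.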
It's also specific to the fact that we're dealing with disks. Okay, now you can plot these functions as a function of kappa. And you see that if kappa gets very large, you become very supersaturated. So the crossover time is going to get shorter, because the more supersaturated you are, the quicker you're driven through this critical droplet. But the main message is: there's a volume term and a surface term. The volume term is simple, and the surface term is very, very deep and very interesting. And it led us to many interesting things living in the background. Well, there's a second theorem; here is an exponential law. I mentioned before, I didn't show this for the other models, but the exponential law is a very universal law. You have to prove it; it's not for free. But this says: if beta goes to infinity, the true crossover time divided by its mean is just going to be exponentially distributed, and the reason is again very simple. You try to grow a droplet, you fail. You go back to the empty box. You try again. And the time at which you finally go over is exponentially distributed, because you have to try many, many times before you succeed. So the gas is very quickly trying to invade this torus, but it's not succeeding until it has created a sufficiently large droplet on which the rest of the particles can rain and fill the box. So very much what you see in condensation phenomena. And the third theorem is also interesting. I didn't write a theorem like this in the other two cases, but it's also true. Theorem 3 says: as you go from the empty to the full box, you must move through this critical droplet. So this critical droplet is not just any funny way of condensing, it is the way to condense.
And the way it is formulated, it says: you take a ball of radius a little less or a little more than this r_c, and you look at the set of all configurations where the halo is completely contained in the slightly bigger ball and completely contains the slightly smaller ball. And this theorem says: if you are on your way from empty to full, so if you know that you have left empty and you will never come back, then with probability tending to one, you must hit this set before you have achieved your full condensation. So this critical droplet really is your gateway to the nucleation. Of course you could fill your box, you know, row by row, but that is not going to happen. That's too unlikely. And in fact, we can make this little perturbation of the sphere very small. It can go to zero, as long as it doesn't go to zero as fast as one over the square root of beta. And so we have a very precise description of what you must go through when you want to condense. So these three theorems together tell you how long it takes, that the law is exponential, and here is your critical droplet: you must go through it. It's the most difficult thing to do, but it's the easiest among all the difficult things to do. So that's what the description really is. Okay, I'm going to finish in maybe five or ten minutes. As I said, the proof behind this is a beautiful and long story. In fact, the four of us have been referring to it as a beautiful nightmare. There are wonderful mathematical things coming our way: variational principles, isoperimetric inequalities, some very classical ones, some more recent ones. In order to understand the leading order term, we need volume large deviations. We need surface moderate deviations in order to understand the fluctuations of this boundary. And basically what we need to do is to take our Dirichlet form and integrate out over all the particles that are inside.
We are then left with a high dimensional integral over particles on the boundary. We need to understand their distribution, and there are lots of constraints. If a particle is not positioned well, it will not be a boundary particle at all. So there are constraints that we have to deal with. There are interactions, because where we put the boundary particles will determine what the weight is in our Dirichlet form. And we need microscopic descriptions. We need mesoscopic descriptions. We need all sorts of approximations, coarse-graining techniques, capacity estimates. It's a beautiful story, but a story that is a very long ride. And so there really is no way in which I could give you a quick fix on this. But along the way we encounter many things; sometimes we say jokingly to each other, it seems like you need half of mathematics to actually deal with this. And more is being added all the time, and it's a beautiful story, but we need a lot of courage to actually finish it. So it's very foundational stuff that is going on. Okay, I'd like to throw two more slides at you to say: well, okay, this was a story about unit disks in R2. What's happening when I want to go to d dimensions? I mean, Siamak already posed the question: where does the beta to the one-third come from? Now I'm going to state a formula here, which is for the moment nothing more than a conjecture. We have no idea really how to prove this yet, but we believe that what will happen, if you go to d dimensions and you would be using d-dimensional unit balls rather than unit disks in two dimensions, is that your crossover time would again have a term that is of order beta with a function that is, again, well, there was something like pi kappa over kappa minus one, but the kappa is replaced by a kappa hat that is a power of kappa. It's kappa to the power one over d minus one. So if d is two, kappa hat is equal to kappa, and we get the old formula back, and a is pi.
And a here would be something simply related to the volume of the unit ball. And I didn't even bother to write that down. And we believe that there would be a correction term that would be of order beta to the power d minus one over d plus one. And if d is equal to two, that gives you the one-third. And there will be a surface term which, apart from some constant, which is probably very deep, will take a very simple form as a function of kappa, again first in terms of kappa hat. And if you take the case d equal to two, then kappa hat becomes kappa, this becomes one, this becomes kappa to the minus one-third, and you get back the formula as we saw it before. So this is what we believe is the case. But there is a dream that one day we will be able to prove this. And with Yogesh, we have started to think about that. And maybe this part is still doable, because it's the leading order term; here the challenges would be much bigger. But it's nice to see that the scaling in beta and in kappa is going to really be dimension dependent. And we have even asked with Yogesh, and we're at the beginning of starting to brainstorm about this: what would happen if you would replace your unit disks in the plane by, for instance, a nice compact convex set? Maybe this set would require some symmetries, maybe not. Can you first of all prove that there is still a phase transition? Yes, in many cases you can. And what would then be the scaling? And it turns out, again only conjecturally at this level, that the scaling with beta and kappa, so the various powers that come up, will depend on the shape of the set that you're taking. So if you say: I take a ball, and then I take an ellipse, nothing much is going to change. But suppose you make it a square, something that is not smooth at the boundary, maybe a triangle; then it seems that all these powers of beta and kappa are going to change.
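The conjectured d-dimensional scaling, as stated, can be summarized as follows. This is only a conjecture, and the functions Gamma_d and S_d are left unspecified here:

```latex
\mathbb{E}_{\square}\bigl[\tau_{\blacksquare}\bigr]
 \approx \exp\!\Bigl(\beta\,\Gamma_d(\hat\kappa)
   \;+\; \beta^{\frac{d-1}{d+1}}\, S_d(\hat\kappa)\Bigr),
\qquad \hat\kappa = \kappa^{1/(d-1)}.
```

For d = 2 this gives kappa hat equal to kappa and the exponent one-third, recovering the planar formula.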
So if you qualitatively change the type of set, then in general you may, and will, affect the powers of beta and kappa. And this is a story that we're trying to understand at this moment, very roughly. And there is some hope that in the long run we will be able to repeat some of the things that we have done in so much detail for the planar case. So I conclude by saying: at least for the unit disk planar case, we have obtained a detailed description of metastability for a very particular particle system in the continuum, the Widom-Rowlinson model, and a very particular dynamics, birth and death and no motion. And we see that there is a volume contribution and there's a surface contribution. And especially the surface contribution has a great depth. There's a deep geometric story behind what the critical droplet looks like. And it's quite a long climb to do, but along the way there are beautiful mathematical tools that come up. And I think it is worth the challenge to really try and pin down metastability in a continuum interacting particle system. So thank you very much for your attention. Thanks a lot, Frank. Another wonderful lecture. So Sarah is going to allow us to unmute ourselves. And so the first item of business is to thank...
Metastability is a widespread phenomenon in the dynamics of non-linear systems subject to noise. In the narrower perspective of statistical physics, metastable behaviour can be seen as the dynamical manifestation of a first-order phase transition. A fruitful approach to metastability is via potential theory. The key point is the realisation that most questions of interest can be reduced to the computation of capacities, and that these capacities in turn can be estimated by exploiting variational principles. In this way, the metastable dynamics of the system can essentially be understood via an analysis of its statics. This constitutes a major simplification, and acts as a guiding principle. The setting of potential theory relevant for interacting particle systems is that of reversible Markov processes. Within this limitation, there is a wide range of models that are adequate to describe a variety of different systems. Our aim is to unveil the common universal features of these systems with respect to their metastable behaviour. The first lecture will be an introduction to metastability. In the other three lectures, we will focus on three examples in detail: - Kawasaki dynamics on lattices. - Glauber dynamics on random graphs. - Widom-Rowlinson dynamics on the continuum. Reference: Anton Bovier and Frank den Hollander, Metastability -- A Potential-Theoretic Approach, Grundlehren der mathematischen Wissenschaften 351, Springer, Berlin, 2015.
10.5446/55664 (DOI)
Alright, so... okay. So just to sort of recap: where were we before the break? Hey. Sorry, one second. There seems to be some sort of echo. Does anyone else hear that? Yeah, I do. Hopefully it's over now. Hopefully it's gone. Um, all right, so, yeah, so just to recap, you know, we studied the random energy model, and at least we started, you know, to get a hint, or at least see sort of a simple version, of this notion of replica symmetry breaking. And so what I'd like to do in this part of the talk is to sort of explain more broadly the replica symmetry breaking picture, and introduce the canonical models that, you know, the story for today's and tomorrow's lectures will be about. And so, okay. So what we'll do is we'll start. Let me just start by defining sort of the canonical models in this setting. So okay, what we'll be thinking about for this part of today's lecture will be a mean field spin system on n particles. So again, as before, the configuration space for the system will be the discrete hypercube, and on the discrete hypercube we'll put a few, you know, different kinds of Hamiltonians. Sort of the first one we want to talk about, we'd mentioned this already, is the Sherrington-Kirkpatrick model, or SK model. And here, what we'll do is, it's an infinite range two-body interaction. So here we normalize things so that, you know, the J_ij are going to be standard Gaussian random variables, i.i.d., mean zero and variance one. And with this normalization, one thing you should check is that the variance here will be of order n. You know, the idea being that each particle will sort of contribute an order one quantity, and the energy itself will sort of maximally be of order the number of particles in the system. That's why we chose this normalization.
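The SK Hamiltonian and the variance check just mentioned can be written down. A sketch:

```latex
H_N(\sigma) = \frac{1}{\sqrt{N}} \sum_{1 \le i < j \le N} J_{ij}\,\sigma_i \sigma_j,
\qquad J_{ij}\ \text{i.i.d.}\ \mathcal{N}(0,1),\quad \sigma \in \{-1,+1\}^N,
% so for each fixed configuration \sigma,
\mathbb{E}\bigl[H_N(\sigma)^2\bigr]
 = \frac{1}{N}\binom{N}{2} = \frac{N-1}{2} \sim \frac{N}{2}.
```

So the energy is of order N in total, an order-one contribution per particle, which is exactly the scaling the normalization was chosen to produce.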
So here instead of having an infinite range two-body interaction will have an infinite range p-body interaction. And then, you know, you could say, okay, well, why just stop at sort of two-body or three-body or five-body why not have, you know, mixtures of these kinds of interactions. And so you can think about mixed p-spin glasses which are just linear combinations of these p-spin models. Okay. So the story I'll sort of talk about today is supposed to apply to sort of this broad class of models. And we at least know it for sort of a large class of mixed p-spin models now. And sort of fortunately, you know, it's a large enough class that when it comes to sort of doing useful calculations when we go to sort of applications and other courses to other fields such as, you know, combinatorics, etc. You can sort of approximate any model you want by the class we can consider, you know, in a way that sort of doesn't affect any of the main calculations you're going to try to sort of shoot for. And if what I said seems extremely vague, it was sort of deliberately so. But feel free to ask more about that during the break. So, you know, the questions we want to think about again is, so can we try to understand the structure of the Gibbs measure, you know, whatever that means in a mean field setting? And specifically, you know, we want to understand it in this sort of low temperature replica symmetry breaking phase. And then after this, we'll talk a bit about the free energy for these systems. 
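As a quick numerical sanity check on this normalization, the following sketch estimates the variance of the SK Hamiltonian at one fixed spin configuration over many independent disorder draws; the system size, sample count, and seed are arbitrary illustrative choices.

```python
import numpy as np

# Empirical check that Var[H_N(sigma)] = (N - 1) / 2 for the SK Hamiltonian
# H_N(sigma) = N^{-1/2} * sum_{i<j} J_ij sigma_i sigma_j, at a fixed sigma.
rng = np.random.default_rng(0)
N, M = 100, 2000                     # system size, number of disorder draws

sigma = rng.choice([-1, 1], size=N)  # one fixed spin configuration
i, j = np.triu_indices(N, k=1)       # index pairs with i < j
pair = sigma[i] * sigma[j]           # sigma_i * sigma_j for each pair

J = rng.standard_normal((M, pair.size))  # M independent disorder samples
H = (J @ pair) / np.sqrt(N)              # M samples of H_N(sigma)

emp_var = H.var()
print(emp_var)                       # close to (N - 1) / 2 = 49.5
```

Because the J_ij are centered Gaussians, the variance is the same for every fixed sigma, which is the point of the mean field normalization.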
And we'll sort of end by discussing the phase diagram and sort of what's known about which systems are replica symmetry breaking, and when they are, as you vary the temperature. And for that last part of the discussion, it will also be helpful to add an external field, by which I mean we'll consider the sort of modified Hamiltonian, which is, say, the p-spin model plus n times an external field times the magnetization. So, first up is to try to sort of understand the structure of the Gibbs measure in the replica symmetry breaking phase. And so what I'd like to do now is start to sort of explain the predicted, and, you know, now known in a broad class of models, structure of the Gibbs measure. And sort of the idea behind this was introduced in a bunch of papers by Mézard, Parisi and Virasoro and various colleagues, and is now sometimes called the Mézard-Parisi-Virasoro picture, or ansatz, for the Gibbs measure. And so their idea is that in the replica symmetry breaking phase, the Gibbs measure should decompose into what are called pure states. So okay, so, you know, for those of you who are familiar with sort of other statistical mechanics models, sort of non-disordered systems, you know, it's sort of similar in spirit to the idea of pure states you see there, but sort of very different in terms of how it plays out. And in particular, you know, one of the really... so first you have to sort of understand what you mean by pure states in this setting. You know, the standard sort of DLR approach to Gibbs measures won't work, because you're studying a mean field setup. But sort of the really incredible thing is not just that it decomposes into pure states, but that these pure states, if you look at the sort of space of pure states, should have this really incredible decomposition. And what they say happens is, well, okay, so if I draw my space of pure states, which is going to be a ball in infinite dimensions...
So this is my rough attempt at the space of pure states. In this space, you'll have your pure states; there will be tons of pure states at low temperature. But these pure states are themselves going to organize into groups, in the following sense: every pair of distinct points within these groups is equidistant. And these groups themselves decompose into groups, where again every point in one of the green groups is equidistant from the points in the other green groups. In particular, hiding behind all of this is an incredible branching structure. What I'm saying is: at the highest level, all the states in the green groups are equidistant from each other within these groups, with barycenters which are themselves equally separated, and then within these groups you have a separation as well. So what we expect is this really incredible clustering of states. I draw this picture where the idea is that any two of these leaves connected through a common parent are equidistant, etc., etc. This example is what's called 3-RSB: there are three levels of replica symmetry breaking here. And you see what's happening is that you have clusters of states within clusters of states, which are themselves within clusters of states. The number of levels of clustering corresponds to the number of levels of replica symmetry breaking. Okay, let me just continue.
And furthermore, the picture is even more precise: if I fix a certain depth in this tree, marginalize over all the states at that depth, and look at the fractional mass each state takes, what you find is that at fixed depth the ranked weights follow a Poisson-Dirichlet structure. More precisely, they follow what's sometimes called the Ruelle Probability Cascade. The idea of an RPC, very vaguely, is this: just as in the REM case, where you had this point process of the largest weights and the true Gibbs measure was obtained by renormalizing by the weights, for the Ruelle cascade you assign independent Poisson point processes to each of the vertices of the tree, in such a way that the masses at any given level form a Poisson-Dirichlet process. And as if this weren't crazy enough, one of the central predictions is that you can have a continuum of levels of clustering; this is sometimes called full replica symmetry breaking. Finally, they predict that at the finest level, under a certain assumption which I'll just call the EA property, the states are pure in the sense that if you just look at one such state, it's a replica symmetric state. Okay. So that's a lot to digest, and what I'll do for the next chunk of time is start to unravel this picture and what we know about it. The starting point is: what does this crazy branching structure have to do with replica symmetry breaking, at least as we've started to understand it in the last lecture? Remember, last time we said replica symmetry breaking should relate to the behavior of independent copies of the Gibbs measure.
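Since the ranked cluster weights at a fixed depth are claimed to follow Poisson-Dirichlet statistics, here is a minimal numerical sketch (my own illustration, not from the lecture) of the classical construction of PD(x) ranked weights from a Poisson process; the truncation level `n_atoms` is purely a simulation artifact.

```python
import numpy as np

def poisson_dirichlet_weights(x, n_atoms=2000, seed=0):
    """Sample (a truncation of) the ranked PD(x) weights, for x in (0, 1).

    Classical construction: if t_1 < t_2 < ... are the arrival times of a
    unit-rate Poisson process, then s_i = t_i**(-1/x) are the points of a
    Poisson point process on (0, inf) with intensity x * t**(-x-1) dt.
    For x in (0, 1) their sum is a.s. finite, and normalizing by the sum
    gives the Poisson-Dirichlet ranked weights.
    """
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(size=n_atoms))  # t_1 < t_2 < ...
    points = arrivals ** (-1.0 / x)                      # decreasing atoms
    return points / points.sum()                         # normalized weights

w = poisson_dirichlet_weights(x=0.6)
```

Plotting a few draws of `w` for different values of `x` shows the heaviest atoms carrying a macroscopic fraction of the mass as `x` decreases, the same kind of weight concentration seen in the REM discussion.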
So what we're going to want to do now is look not just at two copies from the Gibbs measure, but draw infinitely many i.i.d. copies. Marius is asking in the chat if I could define what "state" means in the context of the original models. We're going to get to that; that by itself has a very non-trivial answer, I'd say, so hopefully I'll get to it in a few minutes. And Gurev was asking for which models this picture is predicted to apply. It is predicted to apply to all of these canonical models we introduced: the mixed p-spin models, the p-spin models, and the SK model. But actually getting exact results is part of the difficult story here for mathematicians. Okay, so how do we go from this crazy clustering picture to something about replicas? We had this picture where we had states, whatever exactly that means, and then within those states you have states. What does this have to do with replica symmetry breaking? How can I turn this into a question about replicas? Well, one thing we could think about is forming the following matrix, R^n, of normalized inner products between the different replicas. It's a doubly infinite array. And what you expect in this setting is that, after applying a permutation, the draws you get should be organized in such a way that the Gram matrix has the following very structured form: on the largest off-diagonal blocks you have one allowed value of overlap, which you can think of as zero; within the red regime you'll have another allowed value of overlap, and this allowed value appears in all of the regions here marked in red.
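The block-constant Gram matrix being described can be written out explicitly. Below is my own illustrative construction of the idealized 2-RSB overlap matrix; the cluster sizes and the overlap values q0 < q1 < q2 are arbitrary choices for the sketch, with q2 anticipating the finest-level value mentioned next.

```python
import numpy as np

def two_rsb_overlap_matrix(n_outer, n_inner, n_rep, q0, q1, q2):
    """Idealized overlap (Gram) matrix of a 2-RSB hierarchy:
    q0 between different outer clusters, q1 within an outer cluster but
    across inner clusters, q2 within an inner cluster, and self-overlap
    1 on the diagonal."""
    n = n_outer * n_inner * n_rep
    R = np.full((n, n), q0)                       # coarsest level everywhere
    for a in range(n_outer):
        o = slice(a * n_inner * n_rep, (a + 1) * n_inner * n_rep)
        R[o, o] = q1                              # same outer cluster
        for b in range(n_inner):
            i0 = a * n_inner * n_rep + b * n_rep
            R[i0:i0 + n_rep, i0:i0 + n_rep] = q2  # same inner cluster
    np.fill_diagonal(R, 1.0)                      # self-overlap
    return R

R = two_rsb_overlap_matrix(3, 2, 2, q0=0.0, q1=0.3, q2=0.6)
```

Within any block at a given level the matrix is locally constant, which is exactly the structure being drawn on the board.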
And then at the finest level, except for the diagonal, of course, you'll have yet another allowed value. A replica symmetric picture is supposed to correspond to there being only a q0: the infinite-by-infinite matrix you get has just one number off the diagonal, so you have this huge matrix where, in some sense, every entry can be swapped with any other entry. But when you have one level of replica symmetry breaking, as we had in the random energy model, you'll have two allowed values: say zero, in the picture we have here (it could be nonzero if you have an external field), and then some other value of inner product. And in the picture here we have two steps of replica symmetry breaking, so we have three different allowed values of inner products. Again, the point is that within any one of these boxes the matrix looks locally constant. All right, so what's going on here? Behind all of this is the phenomenon of ultrametricity. What do I mean by ultrametricity? A metric space is called ultrametric if for any three points x, y, and z, the distance between x and y satisfies the ultrametric triangle inequality: it is at most the maximum of the distance from x to z and the distance from z to y. This is to be contrasted with the usual triangle inequality, where the distance is at most the sum; here we're saying the distance is at most the maximum. The simplest example of an ultrametric space is a tree. Whenever you think ultrametric, you should, in the back of your mind, think tree. More precisely, let X denote the leaves of this tree (this is actually, sorry, a rooted tree), and on it impose the following metric.
The distance between x and y will be the depth of the least common ancestor of x and y, subtracted from the depth of the tree. In the homework, you'll go over a precise example of how you find an ultrametric space in a tree. The key points I want to bring up are the following properties of ultrametric spaces. One: in an ultrametric space, all triangles are isosceles; you have two sides which are equal, and the two equal sides are also the longest sides. Two: if I have two balls of radii r1 and r2, where r2 is bigger than r1, then there's a dichotomy for their intersection: it's either the empty set or the entire smaller ball. Three: if I have two points, x1 in a ball around x and x2 in a ball around y, and we know a priori that these balls are disjoint, then the distance between the two points is actually equal to the distance between the centers. And finally, and this is the reason why I say when you think ultrametric space, you should think tree: if you fix a radius, then by property two the balls of that radius form a partition of the space; they're disjoint and they exhaust the full space. But it also tells you that as you take smaller and smaller radii, the new balls aren't going to lie willy-nilly; they're going to be nested within the bigger balls. In particular, if you have a finite sequence of radii, then there's a family of partitions indexed by a depth-k tree, where the family of balls forms a partition which is a filtration.
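The tree metric just defined can be checked directly in a toy example. Here is a small self-contained sketch (an illustration of mine, using a binary tree of depth 3) that builds the metric from least common ancestors and verifies the ultrametric triangle inequality over all triples of leaves.

```python
from itertools import product

# Leaves of a rooted binary tree of depth 3, labeled by root-to-leaf paths.
DEPTH = 3
leaves = [path for path in product(range(2), repeat=DEPTH)]

def dist(x, y):
    """Tree metric: depth of the tree minus the depth of the least
    common ancestor of leaves x and y (so dist(x, x) == 0)."""
    lca_depth = 0
    for xi, yi in zip(x, y):
        if xi != yi:
            break
        lca_depth += 1
    return DEPTH - lca_depth

# Every triangle satisfies d(x, y) <= max(d(x, z), d(z, y)).
ultra = all(dist(x, y) <= max(dist(x, z), dist(z, y))
            for x in leaves for y in leaves for z in leaves)
```

Running the same check with the ordinary triangle inequality in place of the max shows nothing special; it is the max form that encodes the nested-ball structure.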
In the sense that the picture is going to be: first take my largest balls, and then within those largest balls there will be some smaller balls. What I'm saying is that you can group them in the following way: you label one level of the tree with the red balls, and then you label the vertices at the next level with the green balls. The point is that this labeling is natural, in the sense that if alpha is a child of beta, then the ball corresponding to alpha is contained in the ball corresponding to beta. So where are we going to see ultrametricity in our spin glass model? We'll do the following. As before, we look at this doubly infinite matrix of pairwise inner products. If you think about it for a second, you'll realize that this sequence lies in a compact product space, so in particular it's tight, and it will have weak limits. Looking at such a weak limit, we find the following important result of Panchenko, who showed that for a broad class of spin glass models, which I'll call generic, the "probability" (I'll put this in quotes) that the support of the limiting Gibbs measure is an ultrametric set, when endowed with the topology and metric inherited from little l2, is one. More precisely, one thing to observe is that this matrix is weakly exchangeable, in the sense that relabeling all of my i.i.d. copies won't change the law of the matrix. So one way of writing the result is: the probability that the inner product between the first two draws is at least the minimum of the inner product between the first and third draws and the inner product between the second and third draws is one.
To translate from the ultrametric inequality to this, just realize that I've switched from distances to inner products, so maxes become mins and the inequality flips. Just a word, since there are some questions about this: what do I mean by generic models? Generic models are a class of mixed p-spin models, so linear combinations of the p-spin models, with the assumption that the sum of 1/p, over those p whose coefficient is nonzero, diverges. The interest in generic models, as we'll talk about on the next slide, is that while they are in principle a subset of the mixed p-spin models (so in particular this doesn't apply immediately to, say, the SK model or the pure p-spin models), what you can say, by a continuity argument, is that the free energy of any mixed p-spin model, for example the p-spin models and the SK model, can be approximated arbitrarily well by generic models. So if I want to compute the free energy of, say, the SK model, there's a sequence of generic models whose free energies converge to the true free energy, after the n-goes-to-infinity limit. The reason this is helpful is that, at the end of the day, the main reason for introducing the replica symmetry breaking ansatz in the physics literature was that it allowed people to start making computations about free energies and large deviation rate functions, but hiding behind all those calculations was some form of ultrametricity. So what this theorem tells us is that we get this ultrametricity at least for a dense class in the space of models. Okay, so now getting very briefly to the other question that was brought up: now that we see ultrametricity, what do we mean by states and pure states?
Right, so the question we want to ask is: how do we define what these states are from the Gibbs measure at finite n? The idea is as follows. The starting point is to show that for generic models, if you look at the sequence of Gibbs measures, the sequence is what we'll call approximately ultrametric. A precise definition of "approximately ultrametric" would take quite a bit of time away from our discussion, so I'm happy to speak about it in the breakout sessions, but I just want to explain what it means heuristically, because that's the key point. The idea is that instead of trying to pick out sets from the Gibbs measure directly, you want to look at the effective geometry of the hypercube as viewed from the Gibbs measure. And the idea is that, with probability tending to one in the law of the disorder, which I'll denote by bold P, there's a collection of sets of clusters indexed by A^R, by which I mean the infinite tree of depth R in which each non-root vertex has countably many children. These sets are almost balls, almost balls with a certain collection of radii, and for these balls you have properties one through four of ultrametric spaces with high probability in G. By which I mean, for example, for the pairwise distance property: if I take two independent copies that lie in the same cluster, the chance that they're much farther apart than you expect is very small; but if I pick two points in two different clusters, the chance that they're much closer to each other than you expect is also very small. So the point is that these sets, from a geometric measure-theoretic standpoint, are acting like balls in ultrametric spaces, and the states are then just the Gibbs measure conditioned on these balls.
All right, so I think perhaps now is a good time to take a five-minute break, or a short break. Up next, we'll talk about answering one of the other questions: when do you see which kinds of replica symmetry breaking in which models? If anyone has any questions, please type them in the chat. Okay. It seems like Reza answered a lot of your questions. So in that case, maybe we continue on a little bit and end early, to leave more time for questions. So the last thing I want to do is talk a bit about the free energy, and then go from there to what we know about the phase diagram for spin glass models. The starting point is what's called the Parisi formula for the free energy, predicted originally by Parisi in the late 70s, then proved in a sequence of works by Guerra and Talagrand for even models (models with only even components), and finally by Panchenko, building off work of Aizenman, Sims and Starr, which I want to say is around 2007. What it says is that the free energy of any mixed p-spin model converges almost surely to a number which is given by minimizing a certain functional over the space of probability measures on the unit interval. The functional you're trying to minimize is now called the Parisi functional, and it looks like this: you take your measure, and you're going to solve two PDEs. Let me describe the picture with an external field, since we're going to want to talk about phase diagrams in a bit. So you solve two PDEs, and you evaluate them both at a certain point in space and time. These PDEs solve a certain family of Hamilton-Jacobi-Bellman equations: equations of the form, the time derivative of phi plus a coefficient
times the Laplacian in space of phi, plus another coefficient, which is the CDF of the measure mu, times the gradient squared. The terminal data you choose here is different for the two problems, and likewise for the coefficients. This choice of coefficient will depend on the model you choose: for example, in the case of the p-spin models it's p times (p minus 1) times t to the (p minus 2). And the choice of terminal condition depends on the geometry of the problem; since we're looking at spin glasses on the hypercube, for u it is the log of the hyperbolic cosine of x, and for v it'll just be... How does this formula relate to replica symmetry breaking? The idea is as follows. We look at the unique minimizer of the Parisi functional, call it mu star, which is called the Parisi measure. Proving the uniqueness is of course yet another challenge; it was an important result of Auffinger and Chen from 2014. The idea is that the Parisi measure, this minimizer, is essentially going to be a limit of the overlap distribution, and this is actually an equality in the case of generic models. So then we'll say that a model is replica symmetric if the minimizer is just a Dirac mass at a certain point, and replica symmetry breaking if the support contains more than one point. So what ends up happening in the various models is as follows; let's describe the phase diagram. First, let's talk a bit about the Sherrington-Kirkpatrick model. Here, what happens is the following. On the x-axis I'll draw the inverse temperature, and on the y-axis the external field. There's a certain curve, which is called the de Almeida-Thouless line, and what was originally conjectured is that the model should be replica symmetric here.
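For reference, and with the caveat that normalizations vary from reference to reference, so this is a sketch for orientation rather than the exact board notation, one common way the Parisi PDE and functional just described are written, for a mixed model with covariance function $\xi$ and external field $h$, is:

```latex
% Parisi PDE (one common normalization), for mu a probability measure on [0,1]:
% terminal condition Phi_mu(1, x) = log cosh(x), and for s in (0,1):
\partial_s \Phi_\mu(s,x)
  + \frac{\xi''(s)}{2}\Big(\partial_x^2 \Phi_\mu(s,x)
  + \mu([0,s])\,\big(\partial_x \Phi_\mu(s,x)\big)^2\Big) = 0 .

% Parisi functional and the free energy as its infimum:
\mathcal{P}(\mu) \;=\; \log 2 \;+\; \Phi_\mu(0,h)
  \;-\; \frac{1}{2}\int_0^1 s\,\xi''(s)\,\mu([0,s])\,ds ,
\qquad
\lim_{n\to\infty} F_n \;=\; \inf_{\mu} \mathcal{P}(\mu) .
```

In this notation the coefficient $\xi''(s)$ is the model-dependent choice mentioned above (for the pure p-spin model, $\xi(s)=s^p$ gives $\xi''(s) = p(p-1)s^{p-2}$), and $\log\cosh(x)$ is the hypercube terminal condition.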
And replica symmetry breaking here; in fact, it should be full replica symmetry breaking here, so you should have a continuum of levels of replica symmetry breaking. The state of the art, as we know it right now, is this. There's an important result of Fabio Toninelli, who showed that below the AT line you have replica symmetry breaking. More recently, in some work of mine with Ian Tobasco, we showed that, outside of an exceptional set that is compact and away from the critical fields, you have replica symmetry above the AT line. Full replica symmetry breaking is still a really exciting open question, and very recently there's been important progress by Auffinger, Chen and Zeng, who showed full replica symmetry breaking at zero temperature; the point being that you don't have finitely many levels of replica symmetry breaking, you have at least infinitely many. All right, so what happens for the p-spin models? Here the story is a bit more complicated, and it will serve as a bridge to our lecture tomorrow, where we start talking about dynamics and algorithms. What's expected to happen is the following picture. One thing you could do is plot the analog of the de Almeida-Thouless line. What we can show now is that the analog of the AT line looks like this, and in particular it's not the correct phase boundary at sufficiently low temperature. Instead, what's supposed to happen is that there's some other phase boundary, above which you're replica symmetric; another phase boundary below which you have one step of replica symmetry breaking; and finally, when you cross this last phase boundary, full replica symmetry breaking. What's known at the moment is that, again, below the AT line you have replica symmetry breaking. And at very high external fields,
we know that the de Almeida-Thouless line is eventually the correct phase boundary. But what's going on in this entire region here is still, from a mathematical perspective, a real mystery. We'll talk a little bit about it tomorrow when we get to questions related to dynamics. This is maybe a good point to stop for today. All right. Sorry, so many issues. There are some questions for you in the chat. So the pink line in the diagram for the p-spin model was the analog of the de Almeida-Thouless line for the p-spin models. Okay, there are a lot of questions; let me try to answer them in order, or maybe reverse order. Is the conjectured picture on the right for any p greater than or equal to three? Yes, this picture is supposed to hold for all p greater than or equal to three, and the results that are known are known for all p greater than or equal to three. In general, there's a long discussion of AT lines, their generalization to p-spin models, and when they are the correct phase boundaries, in this work of myself and Tobasco from 2016; it's in PTRF. Hopefully that answers at least the person who asked for a pointer. To briefly describe the AT line beyond that, I'll just say this, because it'll come up in tomorrow's lecture. There is an explicit formula for the AT line, and the idea is the following. In the replica symmetry breaking picture, the replica trick story, there's a certain quantity that shows up. It depends on the points of the support of the Parisi measure, and this quantity is called the replicon eigenvalue. The AT line is the curve where this eigenvalue is zero. I'll give a dynamical interpretation of it in tomorrow's lecture, hopefully. Before we move on to another question, let's just thank Aukosh for his two talks.
You can all unmute yourselves and we'll have a clap. So, after this, what we'll do is
Historically, mean field spin glass models come from the study of statistical physics and have served as prototypical examples of complex energy landscapes. To tackle these questions, statistical physicists developed a new class of tools, such as the cavity method and replica symmetry breaking. Since their introduction, these methods have been applied to a wide variety of problems, from statistical physics to combinatorics to data science. This course will serve as a high-level introduction to the basics of mean field spin glasses and is intended to introduce the students to the basic notions that will arise in other courses during the Séminaire. On the first day, we plan to cover the random energy model, the ultrametric decomposition of Gibbs measures in p-spin glass models, and the connection to Poisson-Dirichlet statistics. On the second day, if there is time, we will also introduce notions of free energy barriers and overlap gaps and their connection to spectral gap inequalities and algorithmic hardness results. Suggested Prerequisites: - Measure theoretic probability; - Point processes and their definition as random probability measures; - Basic notions from Gaussian analysis (concentration of measure, Slepian's interpolation inequality)
10.5446/55671 (DOI)
We are happy to have with us this week Andrea Montanari, who will give a series of five lectures, one each day. There will also be a couple of exercise sessions on Tuesday and Wednesday, later on, and a lecture by Léo Miolane on Thursday. So that's the schedule for this week. A reminder that these lectures are being recorded and live streamed, so if you do not wish to appear in the videos or on the internet, please keep your camera off and your microphone muted. We will have a one-hour lecture, with a short break at some point in the middle for people to have a chance to ask questions; there will also be more time afterwards to ask questions. The questions after the lecture will not be recorded, so people are more free to take part. We do welcome discussion, and if you have any questions during the talk, please ask them in the chat. Some of us will be monitoring the chat, and we also have Michael Celentano, who is here to answer questions during the talk. We have a Zulip forum where you can also ask questions after the lecture if some things are unclear, and they will be answered as well. So Andrea, if you could share your screen at this point. Okay, so can you see? Yes. Thank you. Okay, so thanks so much. Thanks, Lea and Alex for organizing this, and Luigi also, I guess. So this will be a series of lectures whose title is Mean-Field Methods in High-Dimensional Statistics and Non-Convex Optimization. I want to describe some progress that was made over the last several years on algorithms and methods in statistics, with ideas that come mostly from probability theory or physics. And this is the plan of the lectures.
I want to start with some motivation, to describe why people, even non-probabilists, might be interested in these kinds of questions and the connection with statistics. Then I'll describe a way to get exact asymptotics for these problems via Gaussian comparison inequalities, then pass to algorithms, in particular first-order methods and a special first-order method called Approximate Message Passing, and how to get optimal first-order methods for regression problems using this technology. These first four points will probably take about three lectures, and the last two lectures will instead be devoted to spin glasses and how the same type of algorithms can be used to optimize spin glass Hamiltonians. There I will first go over some background, then a specific AMP algorithm for optimization in spin glasses and its connection to concepts in spin glass theory, in particular the Parisi formula, and how to get ground states of the Sherrington-Kirkpatrick and related models using these techniques. So let me start. As a premise: since I'll be using this tablet for writing, writing references would eat up part of my time, so I decided not to cite any references or authors except for theorems that have a name. I already sent a sheet with the references, and you can use that to look up who proved what; I think that's more efficient. Okay, so my motivation, or part of my motivation, comes from statistics.
So in statistics, what you have is a set of probability distributions P_theta indexed by a vector theta in some set capital Theta, a subset of R^d. These are probability distributions on some measurable space X. Then you have data: samples Z_1, ..., Z_n that are i.i.d. from one of the distributions in this class, and let me write underline Z for the collection of all the samples. What you want to do, ideally, is come up with an estimator. An estimator is something that gives you an estimate of the parameter theta that was used to generate the data: it's a map from X^n to R^d that takes underline Z and spits out theta hat of underline Z. So that is an estimator. How do I measure the quality of an estimator, what is my goal? I measure it typically by computing a risk function. This depends on theta hat (in the sense that it depends on the function theta hat) and on the true value of the parameter, call it theta_0 to emphasize that this is the true value, and it is defined as the expectation of some distance between theta hat of Z and theta_0, where the expectation is taken with theta_0 the true value of the parameters. So this is the general setting of statistical estimation: you have samples and you want to estimate a parameter, or a function of the distribution, that these samples were generated from. Now there exists one, let's say, canonical approach to this, and it is the most popular approach. You come up with a function, call it ell, sometimes called a loss function, from the sample space times Theta to, let's say, the non-negative reals. You want to construct this function such that if you define the following object, L of theta, the expected loss, as the expectation under theta_0 of ell of Z and theta:
So if you define this quantity, then what you want to happen is that theta_0 is the argmin of L(theta; theta_0) with respect to theta. You want to construct a function script L that is an expectation, such that minimizing it returns the desired parameter. If you have constructed such a function, then you can try to do estimation by what in statistics is called M-estimation and in machine learning is called empirical risk minimization: in the definition of L, you replace the population mean by the empirical mean over the samples z_i, so you can also write it as the expectation over an empirical sample of ell(Z, theta). Given this empirical mean, what you do is define your estimator theta hat as the argmin of L_n(theta), perhaps constrained to theta in some set. So this is the summary of what statistics, or a big chunk of statistics and machine learning, is about; just a five-minute summary. Obviously I left out some ingredients, but let me give you one standard example of the loss function ell. If your probability has a density, you can take minus the log of the density. The rationale for this is that if you then compute L(theta; theta_0), the population risk, this is minus the expectation under theta_0 of log p_theta(Z), and you realize that this is the same as the KL divergence between theta and theta_0 (I think it's like this) plus a constant. So it is uniquely minimized at the desired parameter. Why should this whole approach work? Well, the idea is that you have this space Theta, and your population risk looks like this. This is theta_0, and your empirical risk L_n is something that in expectation should be very close to this; in expectation it is equal to that. So this is L_n.
Its expectation is equal to L, and therefore you might hope that for large n (you minimize it, so perhaps you get this point) it will be close to L uniformly, and therefore theta hat will be close to theta_0. So that was my summary of what statistics is about. Andrea, the Omega_n there, why does it depend on n? We have a question. So, good question. Here I'm hiding a bunch of important things. If you look at this figure (this is a whole area of statistics that I don't want to enter into), the justification for minimizing the empirical risk is that you hope the empirical risk is everywhere close to the population risk; you hope that these two are everywhere close. But you might imagine there is a bit of a trade-off between two things. On the one hand you have n: for n large, they will be close everywhere. On the other hand, if your set of parameters, capital Theta, on which lowercase theta lives, is very large, this will hurt you: the larger capital Theta is, the less likely it is that they are everywhere close. You want the two to be close uniformly over the set capital Theta. Because of that, sometimes instead of minimizing over capital Theta, you minimize over a smaller set, and you can make this set larger and larger as the sample size gets larger and larger. This is not always necessary, and of course it will not be a main topic of this lecture, but just, you know, to be careful, I wrote a constraint there. Okay. Please interrupt me if there is any question.
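To make the M-estimation recipe concrete, here is a toy sketch of my own (assuming, for concreteness, the unit-variance Gaussian location model, where minus the log-density gives the squared loss and the M-estimator reduces to the sample mean):

```python
import numpy as np

rng = np.random.default_rng(0)
theta0 = 1.5
z = rng.normal(theta0, 1.0, size=5000)   # i.i.d. samples from P_theta0

# Loss ell(z, theta) = -log p_theta(z) for a unit-variance Gaussian,
# which, up to the constant log sqrt(2*pi), is (z - theta)^2 / 2.
thetas = np.linspace(-3.0, 3.0, 601)
emp_risk = np.array([np.mean((z - t) ** 2) / 2.0 for t in thetas])

# The M-estimator: minimize the empirical risk over the grid.
theta_hat = thetas[np.argmin(emp_risk)]
```

Here the grid minimizer lands (up to grid spacing) on the sample mean, and the empirical risk curve is the finite-n version of the population risk picture drawn on the board.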
So this is what started at the beginning of the last century. Over the last 10 to 20 years, people became very interested in high-dimensional statistics. This is a set of problems in which, instead, both n and d go to infinity, but often, in fact, n is much smaller than the dimension. So it is kind of surprising that you can infer or estimate a parameter vector with d entries with a sample size that is smaller than d. Here the trick, or the catch, is that you work in problems in which theta has an effective dimension — let me call it dim(theta) — that is much smaller than the sample size. So even if the parameter vector is very long, its effective dimension, in a sense, is much smaller. The canonical, or most important, example of this is the case in which Theta, the parameter space, is the space of s0-sparse vectors in Rd — that is, the set of vectors that have only s0 nonzero components — and you take s0 much smaller than d and s0 much smaller than n. This is the whole area of compressed sensing and sparse regression, high-dimensional regression, et cetera. What I will be concerned with here, in the next couple of lectures, is a case — let's call it noisy high-dimensional statistics — in which d over n is of order one and also dim(theta) over n is of order one. All of these quantities diverge, but their ratios are of order one. This is the asymptotic regime. So let me give you a couple of examples of problems that one might look at. Let me start with example number one. I will not treat this example, but it is an example that can be treated in this vein. You have your distribution: the samples z i are i.i.d. with a mixture distribution.
So these are i.i.d. samples: with probability one half they are generated from a Gaussian with mean theta zero and covariance identity, and with probability one half with mean minus theta zero and covariance identity. In one dimension this will look like this. So you have n samples of this type and you want to reconstruct theta zero. Leo is going to give a seminar, I think, on problems of this kind — a generalization of this problem. Example number two: sparse regression. So the example that I gave before is mixtures of Gaussians; now, sparse regression. This is a case in which my z i are the pairs (y i, x i), where, let's say, x i is normal(0, identity) and y i is the inner product of theta zero with x i plus noise — let me call the noise epsilon i, with epsilon i normal(0, sigma squared). It is sparse regression because I assume that theta zero is sparse, that is, only a certain fraction of its entries are nonzero. A popular method to estimate theta zero in these cases is to minimize the following cost function, where y is the vector whose i-th entry is y i and X is the matrix whose i-th row is the vector x i. What is the rationale for this cost function? The first term is, if you want, minus log p theta of z — this is the maximum likelihood component — and the second is a regularization term that promotes theta sparse. So these are two examples; you can make up many other examples by changing, for instance, the details here. For instance, you can change the distribution of the noise epsilon and then look at what is sometimes called robust regression, in which instead of minimizing the sum of squares, I minimize another penalty, where rho is a function — for instance, you can take rho to be the absolute value. So these are three examples.
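Here is a minimal simulation of this sparse-regression setup, with the lasso cost minimized by plain proximal gradient (ISTA). The dimensions and the value of lambda are illustrative choices of mine, not the lecture's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s0, sigma, lam = 200, 50, 5, 0.1, 0.05

theta0 = np.zeros(d)
theta0[:s0] = 1.0                        # s0-sparse ground truth
X = rng.normal(size=(n, d))              # rows x_i ~ N(0, I)
y = X @ theta0 + sigma * rng.normal(size=n)

def soft(v, t):                          # proximal map of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA on (1/2n) ||y - X theta||^2 + lam * ||theta||_1
step = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()
theta = np.zeros(d)
for _ in range(500):
    grad = X.T @ (X @ theta - y) / n
    theta = soft(theta - step * grad, step * lam)

print(np.linalg.norm(theta - theta0))    # much smaller than ||theta0||
```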
And again, we are interested in studying the asymptotic behavior of these methods in these problems, and I will describe, in the next two lectures, two methods. One is Gaussian comparison. These are kind of classical techniques, but they have been revived in this context fairly recently, and the nice thing is that they are really simple to apply and they are pretty elegant. The other set of techniques that I'll talk about is related to approximate message passing algorithms, and the advantage of this kind of method is that they are algorithmic, in the sense that they don't only give you an analysis of the problem but also really give you an algorithm that solves it efficiently. Okay, so perhaps I'll take a little break, as recommended by Luigi, and check whether there are questions. Thank you. There are a couple of questions in the chat. Okay, one question. Do you want me to read them to you, Andrea? Yeah, I don't know if I can read the questions. One question which was asked — which someone answered — is: why is this example called robust? Yeah, okay, so for this example that I mentioned: in general it is really called M-estimation, but a special case of it was popular in robust statistics, and the basic idea is the following. In what I described before, the point is that the square loss is very sensitive when you have a data point that is an outlier. The simplest example of this is of course the one-dimensional one, in which you are given data points, let's say Z1, ..., Zn, that are i.i.d.: they are equal to theta plus epsilon i, where epsilon i is some zero-mean noise, and you want to estimate theta. And so what people have done since Gauss — or probably before Gauss — is compute 1 over n times the sum over i of Z i, which implicitly is minimizing the sum over i of (Z i minus theta) squared.
So where this comes from is minimizing this cost function, but something that you know very well is that the mean is very sensitive to an outlier. If you have a single outlier, this will hurt you a lot, and so something that is sometimes better to do is compute the median, for instance. And this corresponds to changing the cost function, right? Instead of minimizing the sum of squares, I minimize the sum of absolute values. So by changing the rho, I can obtain more robust estimators; this choice gives the median. I hope this answers the question. Here is another one: are we going to cover the Bayesian interpretation of the penalties? That's the question. Well, no, I'm not going to cover any Bayesian interpretation. Of course, if you look for instance at a cost function like the lasso, you can give it a Bayesian interpretation, in the sense that it is the mode of the posterior when the noise is Gaussian and the prior is Laplace. But I don't think the Bayesian interpretation is very illuminating in this case — and actually, I think it's misleading in this specific case. But we'll discuss something of a Bayesian nature in the lower bounds, probably tomorrow. Okay, I'm not sure exactly what this question is referring to, but it says: absolute value equal to rho, perhaps? I think that was referring to the robust regression answers. Okay. Yeah. So here, in this case, I'm thinking of the sum of rho of (z i minus theta), and I'm taking rho to be the absolute value function. All right. So today, I'll try to describe this approach based on Gaussian comparison. And the basic theorem is the following inequality, known as Gordon's inequality. You have two Gaussian processes, indexed by some finite sets, and they are centered; the index sets S and T are finite. And you assume the following.
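A tiny numeric illustration of this point (mine, not from the lecture): a single large outlier drags the mean far from the center, while the median barely moves.

```python
import statistics

# Symmetric data centered at 0, plus a single gross outlier.
data = list(range(-5, 6))          # -5, ..., 5: mean = median = 0
data_out = data + [1000]           # add one outlier

print(statistics.mean(data_out))   # pulled up to about 83.3
print(statistics.median(data_out)) # still 0.5, barely moved
```

The mean solves the least-squares problem and pays for it in sensitivity; the median solves the absolute-value problem and does not.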
Point one: the variances are equal. Point two: you assume that if you keep the first index unchanged — oh, this is with a minus sign, so this inequality goes in this direction — for all t1 different from t2 (different or equal, it doesn't matter). The third condition goes the other way, for all s1 different from s2 and all t1, t2. If you assume these conditions, then I'm going to look at the min over s of the max over t of X s t — and I'll allow myself some shifts xi — and the claim is that this dominates stochastically the min over s of the max over t of Y s t, for any choice of shifts xi (the xi are real numbers). By "dominates stochastically" I mean the usual stochastic ordering: for any level a, the probability of exceeding a is at least as large for the first as for the second. Okay, so this is Gordon's inequality. I will not prove it; it's not a difficult proof — it's something quite simple. It's probably not that often taught, perhaps because min-max problems are not so popular, but something that is more commonly taught is the Sudakov-Fernique inequality, and you can see that this generalizes the Sudakov-Fernique inequality, which you recover if, for instance, you make one of the inner optimization problems trivial. If you take the set T to be of size one, the maximization problem becomes trivial, and therefore this becomes an inequality between minima (or maxima, which is the same), and you have the usual statement that if one of the two Gaussian processes dominates the other, then the minimum dominates stochastically. Okay, so this is the inequality — perhaps I should show it again — the two conditions to check are these ones. What I want to describe now is how it is applied to one of these statistical problems. Okay, it's not complicated; it's just a matter of choosing things in the right way. And the convenient corollary is the following. Take two sets. Okay, I'll take U to be in
Rd, say, and V in Rn. Okay, and just to be careful, let's say that they are compact. And let's take a function Q that is continuous. Okay, and then I'll define two quantities. First of all, let me take a matrix G with entries G i j that are i.i.d. normal(0,1), and two vectors g and h with i.i.d. normal(0,1) entries. Okay, so I'm given a matrix — the matrix will be n by d — and two vectors, g and h. And then I'll define two quantities, two min-max problems. L star — this depends on the matrix G — is the min over u in capital U of the max over v in capital V of v dot G u (this is a scalar product) plus Q(u, v). Okay, and the other one — okay, let me call it B star, of g and h — will be the min of the max of something much simpler, where the bilinear term v dot G u is replaced by the norm of v times g dot u plus the norm of u times h dot v. Okay, so I have these two problems. And the claim is — let me omit the arguments — that the probability that L star is below any level u is at most twice the probability that B star is below u, for any u. Okay, so in other words, B star is a stochastic lower bound to L star. So this gives you one side. Further, if the min-max problem is convex-concave — what I mean by this is the following: in each of these optimization problems there is a part that is linear in u and linear in v, the first part, and then there is another part that is a function Q(u, v); you want this overall objective to be concave in v and convex in u, and the two sets U and V to be convex — okay, so if that happens, then you have the reverse inequality. So the nice thing is that, in the case of a convex-concave problem, if B star concentrates, this gives you upper and lower bounds, and the upper and lower bounds will basically concentrate around the same value.
So the typical picture here is that the probability that B star is bigger than or equal to u has, as a function of u, a sharp threshold around the typical value. So this axis is u, and this is the typical value, lowercase b: around b, this probability goes sharply from 1 to 0. And therefore, if you use these two inequalities — this bound and the previous bound — then you get that L star must also concentrate around this lowercase b. Okay, now, what is the interest of this? If you look at these two optimization problems: in the first one, besides this Q(u, v) that is general, the bilinear part depends in a tricky way on u and v, even if Q(u, v) is simple. Instead, the second optimization problem is much simpler, because if you treat the norm of v and the norm of u as constants, it becomes separable in u and v. So it amounts to optimizing each coordinate of u and each coordinate of v separately. Another way to think about it is that if you fix the norm of v and the norm of u, this problem is basically linear, modulo the function Q(u, v). So the second problem is much simpler to analyze than the first. So I claim that this corollary is really, as I said, an almost immediate consequence of Gordon's theorem. Once you come up with this comparison — these are two Gaussian processes, indexed by the vectors v and u — there is a minor technical difficulty, namely that v and u range over sets that are not finite and not countable. But okay, I chose my assumptions here in such a way as to make this easy: I can discretize the sets capital U and capital V because they are compact, and since Q is continuous, I'm in good shape.
So going from finite sets to infinite sets is not a big deal here. The only thing that we have to check, to apply Gordon, is that the covariances of the two processes are in the right order; the function Q plays the role of the constants xi. So the only thing I have to check is that the covariances are in the right order, and okay, I'll do this exercise quickly. So I have these two processes again. What I call X in Gordon's theorem is the norm of v times g dot u plus the norm of u times h dot v. And what I call Y is v dot G u plus — and okay, here is a trick — I'll add z times the norm of u times the norm of v, where z is one extra scalar standard Gaussian random variable. This z will make my life easy. Notice that this z doesn't appear in my optimization problem here, because I wanted to keep things clean, and I'm paying for this sloppiness of putting the z here by the factor two. So this factor two is basically the effect of conditioning on z being positive or negative. But you understand that this z doesn't play a big role; it's just a single scalar. So now I'll check that these two processes are in the right order, as required by Gordon's theorem. And okay, the calculation that you have to do is computing the expectation of Y(u1, v1) Y(u2, v2) minus the expectation of X(u1, v1) X(u2, v2). And okay, you see, when you compute the covariance of Y, you get a piece that comes from the matrix term and a piece that comes from the z term. The piece that comes from the matrix term has the form u1 dot u2 times v1 dot v2, and the piece that comes from the z term is trivial, it's simple: norm of u1 times norm of u2 times norm of v1 times norm of v2. And then you subtract the X contribution. And again, there is a piece that comes from the g term and a piece that comes from the h term — and of course, they are symmetric, so they are simple to compute. The first one is norm of v1 times norm of v2 times u1 dot u2, and the second one, from h, is v1 dot v2 times norm of u1 times norm of u2.
OK, so if you put everything together, this difference is now, magically, (u1 dot u2 minus norm of u1 times norm of u2) times (v1 dot v2 minus norm of v1 times norm of v2). OK, and because of Cauchy-Schwarz, both factors are nonpositive, so this product is always nonnegative. And then you check that this implies all the conditions that you wanted. OK, so this proves the first part of the corollary. OK, how do you prove the convex part? This is just a duality argument. If the problem is concave-convex, then L star is the min over u of the max over v of v dot G u plus blah, blah, blah, and this is equal to the max over v of the min over u — here I'm using the concave-convexity. OK, and this is, of course, minus the min over v of the max over u of minus v dot G u plus blah, blah, blah. And now this matrix minus G is really distributed as the matrix G, OK, and therefore I can get a bound on the other side by the same argument as before. OK, so this covers the general corollary. Now I want to describe how to apply it, and instead of taking the most general setting, I'll take the lasso. So, how to apply it. And, OK, the basic idea is — I want to study, if you remember, sparse regression, and my cost function looked like 1 over 2n times the norm of y minus X theta squared, plus lambda times the ell-1 norm of theta. OK. And what I'm interested in is the minimum over theta of L n of theta. So this is not a min-max problem, but I can write it as a min-max problem. First of all, let me do something: a first difficulty is that here y and X are correlated, so the problem doesn't take the simple form that we were describing before. But I can use the fact that y is equal to X times theta 0 plus a Gaussian vector, and I can define u equal to theta minus theta 0. And so I get that this is the min over u in Rd of 1 over 2n times the norm of sigma w minus X u squared — where sigma w is a Gaussian vector with i.i.d. Gaussian coordinates — plus the penalty.
OK, now this is very nice, because now all the Gaussian randomness is in this matrix X, which, up to a scaling, is exactly the same as our matrix G in the little lemma. OK, and now I want to write this as a min-max problem, and this is simple. You write this as — oh, I forgot the max — the max over v of 1 over n times the inner product of v with (X u minus sigma w), minus a quadratic term in v, plus lambda times the ell-1 penalty. OK. So this now takes exactly the form that I described before: this piece is v dot G u, and all the rest is the function Q. OK, and at this point, by using Gordon's theorem, I can prove that the minimum concentrates around a certain value. Now, this is not what I'm really interested in. What I'm really interested in is computing properties of theta hat, the minimizer. And the way I go about it is simply to define the minimum of this over a set S, and use Gordon's theorem to compute the limit of this value. And if the limit of L star restricted to S is strictly bigger than the global minimum, this implies that theta hat is in the complement of S. So by choosing the set S in the right way, I can show that theta hat is in a certain other set. So for instance, a typical thing that I want to prove is that the squared ell-2 norm of theta hat minus theta 0 concentrates around some value — converges, for instance in probability, to some limit a. So I'll take S to be the set of theta such that the absolute difference between the norm of theta minus theta 0 squared and a is bigger than epsilon. So by proving that the minimum over this set is strictly bigger, I can prove this convergence. So this is the basic idea of how this is done. Now, in five minutes, I want to try to describe the typical result that you get here for the lasso. And I'll describe one precise theorem, just stated. So the setting is: you take n and d going to infinity, with the ratio n over d going to some constant, delta, say.
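The step of introducing the max over v rests on the conjugate representation of the squared norm, one half of the norm of a squared equals the max over v of v dot a minus one half of the norm of v squared, maximized at v = a. A quick numeric sanity check of that identity (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=5)

# The concave quadratic v -> <v, a> - 0.5 ||v||^2 is maximized at v = a,
# with maximum value 0.5 ||a||^2.
def objective(v, a):
    return v @ a - 0.5 * (v @ v)

assert np.isclose(objective(a, a), 0.5 * (a @ a))

# Any other v gives a smaller value.
for _ in range(100):
    v = rng.normal(size=5)
    assert objective(v, a) <= 0.5 * (a @ a) + 1e-12
```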
And what I'll assume about theta 0 is that its empirical distribution converges, say in W2 distance, to some limit law p Theta. And then the limit is described by the solution of a system of equations. So I call (tau star, beta star) the unique solution of these two equations: tau squared equals sigma squared plus 1 over delta times the expectation of (eta(Theta plus tau Z) minus Theta) squared — this is the first equation — and the second equation is beta equals tau times (1 minus 1 over delta times the probability that the absolute value of Theta plus tau Z exceeds the threshold). So in these equations, capital Theta has the distribution p Theta and Z is normal(0,1), and the two have to be independent. So I'll not describe at the moment where these two equations come from, but one way to derive them is to analyze this Gordon problem. Here, okay, there is one thing I didn't define — I was a bit quick here: there is another parameter, the threshold appearing in eta, which depends on tau, lambda, and beta. And the function eta(x; u) is the soft-thresholding function: (the absolute value of x minus u), positive part, times the sign of x. So it's a function that looks like this: it's 0 between minus u and u, and then it's linear outside the interval. So I solve these two equations in tau and beta, and then this will give me the limiting distribution of the lasso, in the following sense. One minute — let me just state this. I look at mu hat lambda, which is 1 over d times the sum over i from 1 to d of the Dirac delta at the pair (theta 0 i, theta hat i). So this is the joint empirical distribution of the true vector and the lasso estimate. And on the other hand, let me call mu lambda the law of the pair (Theta, eta(Theta plus tau star Z)), with the appropriate threshold. And the theorem is that the probability that the distance between mu hat and mu lambda is bigger than epsilon is smaller than some constant of epsilon
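To get a feel for equations of this kind, here is a small numerical sketch. The threshold convention (alpha times tau) and the calibration formula for lambda are assumptions of mine, following the standard lasso state-evolution literature rather than the lecture's exact notation, and the Bernoulli-Gaussian prior is an illustrative choice.

```python
import numpy as np

# Soft thresholding: eta(x; u) = (|x| - u)_+ * sign(x)
def eta(x, u):
    return np.sign(x) * np.maximum(np.abs(x) - u, 0.0)

rng = np.random.default_rng(1)
m = 200_000
# Prior: Theta = 0 w.p. 0.9, Theta ~ N(0,1) w.p. 0.1 (illustrative)
theta = rng.normal(size=m) * (rng.random(m) < 0.1)
z = rng.normal(size=m)

sigma, delta, alpha = 0.5, 0.64, 1.5
tau = 1.0
# Fixed-point iteration for tau^2 = sigma^2 + (1/delta) E[(eta(Theta + tau Z; alpha tau) - Theta)^2]
for _ in range(200):
    mse = np.mean((eta(theta + tau * z, alpha * tau) - theta) ** 2)
    tau_new = np.sqrt(sigma**2 + mse / delta)
    if abs(tau_new - tau) < 1e-9:
        break
    tau = tau_new

# Standard calibration of the regularization parameter (assumed convention):
# lambda = alpha * tau * (1 - (1/delta) * P(|Theta + tau Z| > alpha tau))
lam = alpha * tau * (1 - np.mean(np.abs(theta + tau * z) > alpha * tau) / delta)
print(tau, lam)
```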
times e to the minus n times some other constant of epsilon. So basically, what this theorem tells you is the complete joint empirical distribution of the true parameter vector and of the lasso estimate, in terms of the solution of these two equations. So you solve these two equations — this is very easy — and this gives you the two parameters, tau and beta. And then you construct these two random variables: one is Theta, of course, and the other is eta applied to Theta plus Gaussian noise. You compute these two — and I'm sorry, here I messed up my formula — and then these two are close, in the sense that their distributions are close, for instance, in Wasserstein distance. OK, so I'm sorry I went over time, but I guess that's all for today. That's OK, thank you. So we will now stop the recordings. We will unmute all the participants to thank Andrea, and we will now stop the recordings and the live stream and have a question period. After that we will also keep the usual tools open so people can also chat informally for a while longer if they like. So feel free to unmute yourself to ask questions at this point, or ask in the chat if you prefer. There are a few questions in the chat. Do you want to look at them, Andrea? Yeah, I was looking. Let me quickly read them — I'll get the PDF.
Starting in the seventies, physicists have introduced a class of random energy functions and corresponding random probability distributions (Gibbs measures) that are known as mean-field spin glasses. Over the years, it has become increasingly clear that a broad array of canonical models in random combinatorics and (more recently) high-dimensional statistics are in fact examples of mean-field spin glasses, and can be studied using tools developed in that area. Crucially, these new application domains have brought up a number of interesting new questions that were not central from the viewpoint of statistical physics. These lectures will focus on these new questions: (i) Statistical questions: what is the accuracy or uncertainty associated to a certain statistical method? (ii) Computational questions: can we efficiently compute marginals of a Gibbs measure? Can we generate low-energy configurations? The following is a rough outline of the lectures: 1) High-dimensional statistics. General setting and key questions. The role of sharp asymptotics. Examples and general phenomena. 2) Message passing algorithms, and approximate message passing (AMP). Sharp analysis of AMP. 3) Optimal AMP algorithms. Connection with Bayes error. Connection with convex optimization. 4) Replica symmetry breaking. Parisi formula. Computational implications. 5) Optimization algorithms for mean-field spin glasses. This course will be accompanied by exercise sessions.
10.5446/55675 (DOI)
Currently at an instructor position at NYU's Courant Institute and the Center for Data Science, he will tell us today about information-theoretic limits in Bayesian inference with Gaussian noise. Okay, so yes, thank you very much, Leah. Thank you very much to the organizers for letting me speak at this nice summer school. So today I'm going to talk about information-theoretic limits in inference problems. And yeah, I wrote already a little outline. First I'm going to give some definitions and state a theorem, and then we are going to focus on two very simple toy models: the estimation of a sparse vector, and the estimation of the rank-one component of a noisy tensor. So feel free to interrupt me if you have questions. And okay, so let's go. So what do we consider? The general setting in this presentation is that we consider a signal vector, which we are going to write X, that belongs to Rn. And this signal vector is drawn from some probability distribution that we write P of X. And we are going to assume that this probability distribution has a finite second moment. Right? And okay, basically X is the quantity that we want to infer. But what we observe is not X directly: we observe the vector Y, which is equal to some factor, square root of lambda, times X, plus Z, which is Gaussian noise. The components of Z are taken i.i.d. standard normal random variables, independent of X. And this parameter lambda is what we will call the signal-to-noise ratio. So the Y that you have on the left is what you observe. Given Y, your goal is to infer X. And you see that if lambda is very large, it will be very easy to infer X, because the signal component of your observation will dominate, whereas if lambda is very small, it might become trickier. All right? Does this setting make sense?
Okay, and there are many questions that you can ask in such a general statistical problem, but here we are going to focus on a very particular one, which is: what is the value of what we call the minimal mean squared error, defined as follows. The MMSE, as a function of the signal-to-noise ratio lambda, is defined as the minimum over all estimators — that is, over all X hat that take Y as input and output an estimate of X, so all measurable functions from Rn to Rn — of the error of our estimator in the L2 sense. All right? So we would like to compute this quantity. It's a very, very natural object of interest, because it tells you basically what you can potentially hope for when recovering X, meaning that if this minimal mean squared error is very large, you cannot hope to recover X very well using the observation Y, whereas if it's quite small, there potentially exists an estimator that estimates X well. But of course, here you do not know whether there is an efficient way to compute such an estimator; it just tells you that there exists a function from Rn to Rn that estimates X well. Okay? That's why we are talking about information-theoretic limits: it is purely a notion of information, not of the algorithmic hardness of estimating X. All right? So if this makes sense — of course, you can show, by the Pythagorean theorem, that the optimal estimator of X given Y in the L2 sense is just the posterior mean of X given Y. Okay, and if you want to plot the behavior of this MMSE as a function of lambda, what you expect, of course, is that it should start, when lambda is equal to zero, at the variance of X — you cannot estimate better than that — and then it possibly goes to zero when lambda goes to infinity. But your goal would be to understand this curve a bit more precisely than that. All right? And so that's what we are going to try to do.
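As a concrete scalar illustration (my example, not the speaker's): for X uniform on {-1, +1} and Y = sqrt(lambda) X + Z, the posterior mean is E[X | Y] = tanh(sqrt(lambda) Y), and the MMSE can be estimated by Monte Carlo and compared with the best linear estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 500_000, 2.0

x = rng.choice([-1.0, 1.0], size=n)        # Rademacher prior
y = np.sqrt(lam) * x + rng.normal(size=n)  # Gaussian channel

x_hat = np.tanh(np.sqrt(lam) * y)          # posterior mean E[X | Y]
mmse = np.mean((x - x_hat) ** 2)

# The posterior mean beats the best linear estimator c * Y,
# whose mean squared error is 1 / (1 + lambda) here.
c = np.mean(x * y) / np.mean(y * y)
lin_mse = np.mean((x - c * y) ** 2)
print(mmse, lin_mse)  # mmse is strictly below lin_mse
```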
And here we see that, okay, since the optimal estimator is the posterior mean, it's very natural to investigate the posterior distribution of X given Y. So that's what we are going to do now. The posterior distribution of our signal given our observation — the probability of X being equal to little x given Y — is proportional to the prior that you have on your signal, P of x, times a term induced by your observation. And because you have Gaussian noise, this term takes the form of a Gaussian likelihood in lambda and x — right? I recall that Y is equal to square root of lambda times X plus Z, and Z is a standard Gaussian. All right? And here, if you expand the square that you have in the exponential, you can say again that this is proportional to the exponential of square root of lambda times x dot Y, minus lambda over 2 times the norm of x squared. So here I expanded the square, and I also removed the term that didn't depend on little x, because this term is carried inside this proportionality symbol. All right? So that is our posterior distribution. And what's inside the exponential, we will naturally call the Hamiltonian, and write H of x for it, and we'll see this as an energy — or minus an energy — from a statistical physics point of view. All right? And once we have this notation, we can simply write what our optimal estimator, the posterior mean, is: simply the mean under this posterior distribution, that is, this quantity divided by the normalizing constant. This is the optimal estimator that achieves the MMSE. And here again, we are going to give a name to this normalizing constant. This normalizing constant is usually called the partition function, and we are going to write it Z of lambda. All right? So that's just another notation. And the next notation that we use is that it's quite a pain to write this whole ratio, so we are going to use a shorthand notation for it, which is quite standard here.
We are going to use the brackets, where the brackets here denote expectation with respect to this posterior distribution. Okay? So the little x that you see here is exactly the one that you have here. All right? Okay? And, okay, so before really studying this posterior distribution and this MMSE, we'll define the last object that we will need — the definition most central to today: we define the free energy as the expected log normalizing constant. So F of lambda — lambda is again the signal-to-noise ratio — is the expectation of the log of this normalizing constant. All right? And here, it turns out, because we are studying a statistical problem, that it has a nice meaning, in the sense that you can check that it is related to the mutual information between your signal and your observation. Okay? And what is the mutual information? It's just a measure of correlation: it is the Kullback-Leibler divergence between the joint distribution of X and Y and the product of their marginals. Okay? So basically this quantifies how much information Y carries about X. But, okay, studying this free energy is quite instrumental in statistical physics — basically, that's the first thing you compute. And that's also what we are going to do here. And the reason for that is that we have a nice theorem called the I-MMSE theorem. So that's what I'm going to talk about now: the I-MMSE theorem. And this theorem was known for a long time in the form of the de Bruijn identity in information theory, but recently it has been developed by people like Guo, Shamai, and Verdu, and these people call it rather the I-MMSE theorem. And the reason for that is that it relates the mutual information — the I — with the MMSE. It tells you that the derivative of the mutual information — so basically, up to an affine transformation, the derivative of the free energy — is equal to one half of the minimal mean squared error.
And because here we will rather be working with the free energy — and we know from above that the free energy is just an affine transformation of the mutual information — equivalently, it tells you that the derivative of your free energy is equal to one half of this thing. So the right-hand side here is just the correlation of a sample from the posterior distribution, the little x, with the signal. And so this we are going to call the overlap, and basically it is equal to one minus the MMSE. All right. Any questions so far? So yeah, I'm not going to prove it completely, because I think you have probably seen that in the lecture of Jean-Christophe Mourrat at the beginning of the summer school, but I'll just show briefly where it comes from. So we have F of lambda, which is again the expectation of the log of our normalizing constant. So let me just write this Hamiltonian properly — in fact, I just modified it a little bit from what we had before: I just replaced the Y that appeared in the Hamiltonian by its value, which is square root of lambda times X plus Z. And here, if you compute the derivative of F with respect to lambda, this makes an expectation under the Gibbs distribution — under the brackets — appear. And so what you get is something like the bracket of x dot X, plus a term involving x dot Z, minus one half of the bracket of the norm of x squared. Right? And okay, here I will not do it in full, but you have three pieces. The piece in the middle is basically what you want to have — that's what's in the statement of the formula — but there are two other pieces, here and there. And what you do now is Gaussian integration by parts, and then you use the Bayes rule. And if you do that, what you end up with is exactly what you want: one half of this. And because the overlap is related to the MMSE and the free energy to the mutual information, this also gives you the first form of the theorem, which is totally equivalent.
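A quick scalar sanity check of the I-MMSE relation (my own illustration, for a standard Gaussian prior, where everything is in closed form: I(lambda) = 0.5 log(1 + lambda) and MMSE(lambda) = 1 / (1 + lambda)):

```python
import math

# For X ~ N(0, 1) and Y = sqrt(lam) * X + Z, Z ~ N(0, 1):
def mutual_info(lam):
    return 0.5 * math.log(1.0 + lam)

def mmse(lam):
    return 1.0 / (1.0 + lam)

# Check dI/dlam = 0.5 * MMSE via a centered finite difference.
lam, h = 1.7, 1e-6
deriv = (mutual_info(lam + h) - mutual_info(lam - h)) / (2 * h)
print(deriv, 0.5 * mmse(lam))  # the two agree
```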
All right, so that is always the same standard move in statistical physics: you compute your free energy and then differentiate it with respect to your parameters to access interesting quantities. Here we are not particularly interested in the value of the free energy itself; we differentiate it to reach quantities like the MMSE. Now I am done with the definitions, and we are going to apply this machinery to some toy models and see what we get. The first toy model I will call sparse vector estimation — and it is a very toy version of sparse estimation, the simplest thing you could imagine. Take e_1, …, e_D, the canonical basis of R^D, with D of the form 2^n. That may seem a little weird at first, but we will see it is useful. Then sample σ₀ uniformly from {1, …, D} = {1, …, 2^n}, and let the signal vector be X = e_{σ₀}. So X is nothing more than a uniformly random vector of the canonical basis — a very sparse vector, of a very particular type. We do not observe X directly but a vector Y = √(λn) X + Z; the only difference with the earlier setting is the factor n, which is there for scaling, and Z again has 2^n i.i.d. standard Gaussian coordinates. So we are in exactly the same setting as before, and we would like to recover X given Y.
First remark: you can argue that this toy model is trivial, because the maximum over σ = 1, …, 2^n of the noise coordinates Z_σ is approximately √(2 log 2 · n) — we have 2^n independent Gaussians. So to recover X we can simply take the largest coordinate of Y as the estimate of the corresponding canonical basis vector, and we expect recovery to be possible if and only if the signal-to-noise ratio satisfies λ > 2 log 2. In that sense the inference problem is already solved, since we know the behavior of the maximum of independent Gaussians; but let us run our free-energy-and-MMSE formalism and check that we indeed recover this result. We write the posterior distribution as before: P(σ₀ = σ | Y) equals 1 over the normalizing constant, times the uniform prior on σ₀, times an exponential weight which, after substituting Y, reads exp(√(λn) Z_σ + λn·1{σ = σ₀} − λn/2). That is our posterior. The free energy F_n(λ) — now with an extra 1/n factor for normalization — is then (1/n) times the expected log of the sum over σ of 2^{−n} exp(√(λn) Z_σ + λn·1{σ = σ₀} − λn/2), where I used that both X and e_σ are vectors of the canonical basis. And here you can make another remark: I basically already know this free energy and this Hamiltonian. Why? Because it is very close to the random energy model.
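The 2 log 2 threshold for the naive argmax estimator described above shows up already in a small simulation. The sizes (n, number of trials, the two values of λ) below are illustrative choices, not from the lecture:

```python
import numpy as np

# Monte Carlo check of the argmax estimator for the sparse vector
# problem: Y_sigma = sqrt(lam * n) * 1{sigma = sigma0} + Z_sigma over
# 2^n coordinates.  "Take the largest coordinate of Y" should succeed
# when lam >> 2 log 2 and fail when lam << 2 log 2.

rng = np.random.default_rng(0)
n = 14
D = 2 ** n

def success_rate(lam, trials=200):
    hits = 0
    for _ in range(trials):
        sigma0 = rng.integers(D)
        y = rng.standard_normal(D)      # noise coordinates Z_sigma
        y[sigma0] += np.sqrt(lam * n)   # planted signal coordinate
        hits += int(np.argmax(y) == sigma0)
    return hits / trials

high = success_rate(8 * np.log(2))      # well above the 2 log 2 threshold
low = success_rate(0.4 * np.log(2))     # well below it
assert high >= 0.9 and low <= 0.3
```

Above the threshold the planted coordinate dominates the maximum of the 2^n noise Gaussians almost every time; below it, the argmax is essentially lost in the noise.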
Indeed, in the random energy model that you may have seen with Aukosh, the Hamiltonian is simply √(λn) Z_σ. Here we have the same term, plus an extra term created by the signal, which gives a bit more energy to the configuration σ₀. So you can see this as a statistical, planted analogue of the random energy model. We can study it, and what we will prove is that the free energy above converges, as n goes to infinity, either to 0 if λ < 2 log 2, or to λ/2 − log 2 if λ is greater than that. In fact, the proof will be much easier than for the random energy model, because the planted term forces a single configuration to carry all the weight of the Gibbs measure — we will see this in a second. But first, a consequence: since F′(λ) = ½(1 − MMSE), we deduce directly that the MMSE has the shape we expected — there is a critical value at 2 log 2, and the MMSE jumps from 1 to 0 there. So this works, even if the result is of course obvious here. How do we prove this convergence of the free energy? Using simple upper and lower bounds. Start from the definition. Because I expect that only the configuration corresponding to the planted signal, σ₀, dominates, my guess is that the lower bound obtained by keeping only the σ = σ₀ term — that is, 2^{−n} exp(√(λn) Z_{σ₀} + λn − λn/2) — is tight. And keeping just this term gives exactly the expression in our proposition.
Namely, it gives λ/2 − log 2 in the limit. So the lower bound is very easy, and the upper bound is not much more difficult: we only use Jensen's inequality — by concavity of the logarithm, E log Z_n is at most the expected log of the normalizing constant's conditional expectation given the signal and the noise on the signal coordinate. That conditional expectation can be computed exactly: each configuration σ ≠ σ₀ contributes 2^{−n} (since E exp(√(λn) Z_σ − λn/2) = 1), while the configuration σ₀ contributes 2^{−n} exp(√(λn) Z_{σ₀} + λn/2). The sum of the first terms goes to one, and the √(λn) Z_{σ₀} term is of order √n, hence negligible compared to the other terms in the exponential, so the behavior is dictated by the λn/2 − n log 2 term. As n goes to infinity, this goes to λ/2 − log 2 if λ > 2 log 2 and to 0 if λ < 2 log 2 — exactly the matching upper bound needed for the proposition. So this sparse vector estimation is a planted version of the random energy model — a version with a signal — and it is much easier to study: when λ is large enough, a single configuration dominates, whereas in the random energy model a few configurations of top energy share the measure. The signal term inside the Hamiltonian makes things much simpler. Any questions so far? Okay, now that we are done with this little warm-up, let us connect this simple example to a more interesting model: rank-one tensor estimation. Here we take a signal uniformly distributed on the hypercube {−1, +1}^n.
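Before moving on, the sparse-vector free energy limit max(0, λ/2 − log 2) derived above can be checked by Monte Carlo at moderate size. The choices of n, replicate count, and tolerance below are illustrative, and finite-n fluctuations of order √(λ/n) are expected:

```python
import numpy as np

# Monte Carlo estimate of the free energy
#   F_n(lam) = (1/n) E log sum_sigma 2^{-n} exp( sqrt(lam*n) Z_sigma
#                                + lam*n*1{sigma=sigma0} - lam*n/2 )
# compared against its claimed limit max(0, lam/2 - log 2).

rng = np.random.default_rng(1)
n = 16
D = 2 ** n

def free_energy(lam, reps=100):
    vals = []
    for _ in range(reps):
        # log-weights of all 2^n configurations, prior 2^{-n} included
        h = np.sqrt(lam * n) * rng.standard_normal(D) - lam * n / 2 - n * np.log(2)
        h[0] += lam * n                       # plant the signal at sigma0 = 0
        m = h.max()
        vals.append((m + np.log(np.exp(h - m).sum())) / n)  # stable logsumexp
    return float(np.mean(vals))

for lam in (0.5, 2.5):                        # below / above 2 log 2 ~ 1.386
    limit = max(0.0, lam / 2 - np.log(2))
    assert abs(free_energy(lam) - limit) < 0.2
```

Below the threshold the estimate hovers near 0; above it, near λ/2 − log 2, with the planted configuration visibly dominating the log-sum.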
What we are going to observe is Y = √(λ/n^{p−1}) X^{⊗p} + Z: a rescaled signal-to-noise ratio times the p-fold tensor power of X, plus noise. Concretely, Y is a p-tensor with indices i₁, …, i_p, and each entry is Y_{i₁…i_p} = √(λ/n^{p−1}) x_{i₁} ⋯ x_{i_p} + Z_{i₁…i_p}, where the Z_{i₁…i_p} are i.i.d. standard Gaussian random variables and the n^{p−1} is just for scaling. In particular, for p = 2 this is the statistical problem of recovering a rank-one component in a noisy matrix; when p increases you get the tensor version of that problem, and that is what we have here. Again, since we always follow the same method, let us write the free energy: (1/n) times the expected log of a sum over all configurations x in the hypercube of the uniform prior times exp(H_{n,p}(x)), where the Hamiltonian is now H_{n,p}(x) = √(λ/n^{p−1}) Σ Z_{i₁…i_p} x_{i₁} ⋯ x_{i_p} + λ ⟨x, X⟩^p / n^{p−1} − λn/2. The first term is really the analogue of the p-spin — it is exactly the p-spin Hamiltonian — but this is not a plain p-spin, because we have a signal, hence the second term: the p-th power of the inner product of x with the signal, normalized by n^{p−1}. So rank-one tensor estimation looks exactly like a p-spin model plus an additional signal term. And as Aukosh probably told you, the random energy model can be seen as the p → ∞ limit of the p-spin model.
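The Hamiltonian decomposition just written — likelihood exponent = p-spin term + signal term − λn/2 after substituting Y — can be verified directly by a small algebra check. Here p = 3 and the sizes are arbitrary small choices:

```python
import numpy as np

# Check the decomposition of the log-likelihood exponent:
#   sqrt(lam/n^{p-1}) <Y, x^{tensor p}> - lam*n/2
#     = sqrt(lam/n^{p-1}) <Z, x^{tensor p}>          (p-spin term)
#       + lam <x, X>^p / n^{p-1} - lam*n/2           (signal term)
# using Y = sqrt(lam/n^{p-1}) X^{tensor p} + Z and <X^{tensor p}, x^{tensor p}> = <x, X>^p.

rng = np.random.default_rng(5)
n, p, lam = 8, 3, 1.7
X = rng.choice([-1.0, 1.0], size=n)          # planted hypercube signal
Z = rng.standard_normal((n,) * p)            # i.i.d. Gaussian noise tensor
scale = np.sqrt(lam / n ** (p - 1))
Y = scale * np.einsum('i,j,k->ijk', X, X, X) + Z

x = rng.choice([-1.0, 1.0], size=n)          # an arbitrary configuration
xp = np.einsum('i,j,k->ijk', x, x, x)
lhs = scale * np.sum(Y * xp) - lam * n / 2
rhs = scale * np.sum(Z * xp) + lam * (x @ X) ** p / n ** (p - 1) - lam * n / 2
assert abs(lhs - rhs) < 1e-8
```

The identity holds exactly (up to floating point), which is all the derivation of the free energy above relies on.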
And here we are going to use the same heuristic to compare this low-rank estimation problem with the sparse vector estimation problem of the previous section. Very heuristically: when p is large, the signal term simplifies, because (⟨x, X⟩/n)^p is either 1, if x equals the planted configuration, or the p-th power of something strictly between 0 and 1 — approximately 0 when p is very large (we forget about the configuration x = −X to simplify things). So for large p, the signal part of the Hamiltonian becomes λn·1{x = X}, and the Hamiltonian simplifies. As for the Gaussian random variables in the Hamiltonian — write W(x) for the p-spin part normalized to variance one — what we can say is that they are approximately i.i.d. standard normal. Why this crude approximation? Because if we look at the covariance structure, the covariance between the Gaussian indexed by u and the Gaussian indexed by v, u and v being two vectors of the hypercube, equals (⟨u, v⟩/n)^p, and by the same heuristic as before, when p is very large this is 1 if u = v and approximately 0 otherwise (again forgetting the case u = −v). So we expect that for large p, the Hamiltonian of the order-p rank-one tensor estimation problem is approximately √(λn) Z̃_x + λn·1{x = X} − λn/2, where the Z̃_x are i.i.d. N(0, 1): the Gaussian field behaves as if its values were independent.
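The covariance heuristic — (⟨u, v⟩/n)^p ≈ 1{u = v} for typical hypercube pairs — is easy to see numerically, since independent uniform ±1 vectors have normalized overlap of order 1/√n. A small sketch (sizes illustrative):

```python
import numpy as np

# The Gaussian field in the p-spin Hamiltonian has covariance
# (u.v / n)^p between configurations u and v.  For independent uniform
# hypercube vectors the normalized overlap is O(1/sqrt(n)), so this
# covariance is essentially 0 for moderate p, while it equals 1 exactly
# for u = v -- the heuristic behind the REM approximation.

rng = np.random.default_rng(2)
n, p, trials = 200, 6, 1000

covs = []
for _ in range(trials):
    u = rng.choice([-1.0, 1.0], size=n)
    v = rng.choice([-1.0, 1.0], size=n)
    covs.append((u @ v / n) ** p)

assert abs(np.mean(covs)) < 1e-3        # off-diagonal covariance ~ 0
u = rng.choice([-1.0, 1.0], size=n)
assert (u @ u / n) ** p == 1.0          # diagonal covariance is exactly 1
```

For p = 6 the typical off-diagonal covariance is already of order n^{−3}, so treating the field as i.i.d. is a good first approximation.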
And here you recognize exactly the Hamiltonian of the sparse vector estimation problem: again you have 2^n configurations, and the energy of a configuration x is an independent random variable plus a term indicating whether you equal the signal or not. So for large p, this is the sparse vector problem — the exact analogue of the statement in statistical physics that the random energy model is the p → ∞ limit of the p-spin model. While this was only heuristic, it can in fact be made more precise. Proposition: the limit as p → ∞ of the limiting free energy F_p(λ) is 0 if λ is below the value 2 log 2, and λ/2 − log 2 otherwise. This makes the connection between our tensor problem and sparse vector estimation concrete, and it is not difficult to prove — the proof follows the same lines as the free energy computation for sparse vector estimation. We are not going to do it; instead, let us discuss the consequences a little. The picture we now have for this tensor estimation problem therefore looks as follows.
On the λ axis, with the threshold 2 log 2 marked, we have the MMSE curve for p = ∞ — the sparse vector estimation case. What the proposition above tells us is that the MMSE of rank-one tensor estimation approaches, as p → ∞, the MMSE of sparse vector estimation. So for p = 2, the low-rank matrix estimation problem, we expect one curve; as p increases, say p = 3, the curve moves toward the step; and for larger and larger p — say p = 10 — the MMSE of tensor estimation looks very close to the p = ∞ shape. One interesting fact: for p ≥ 3 there is always a jump in the MMSE, whereas in the matrix case p = 2 the MMSE goes continuously from 1 down to smaller values. Let me conclude with two remarks. First, we have seen that this complicated tensor estimation problem is, in the p → ∞ limit, well approximated by sparse vector estimation; but if you want to go further and plot the colored curves I sketched above for finite p, you need techniques like the ones discussed by Jean-Christophe in his lectures — the Hamilton–Jacobi equations — to compute the n → ∞ limit. Doing so gives a formula for the limiting free energy, and differentiating that formula produces exactly the curves sketched above.
That was the first remark. The second is vaguer: so far we only talked about the best performance achievable by any estimator, information-theoretically. If you ask instead how hard it is computationally to estimate the signal, you can note that the sparse vector estimation problem is computationally hard in the natural sense: the estimator checks all 2^n coordinates and takes the maximum — exhaustive search over an exponential number of configurations. So if, for very large p, the tensor estimation problem is equivalent to sparse vector estimation, you could believe that the tensor problem is also hard for large p. But this is just an intuition — I do not claim anything rigorous about it. I think that is all I wanted to say, so thank you very much, and let me know if you have any questions. — Thank you, Leo. Let me unmute the participants so we can all thank you together. We have time for a few questions.
We will discuss briefly the statistical estimation of a signal (vector, matrix, tensor...) corrupted by Gaussian noise. We will restrict ourselves to information-theoretic considerations and draw connections with statistical physics (random energy model, p-spin model).
10.5446/55676 (DOI)
This is the Open Online Probability School, originally designed as an in-person summer school. He has been giving lectures on mean-field methods as they apply to high-dimensional statistics and to non-convex optimization; in the last lecture we connected this to statistical physics models, and today we will see the continuation. Today we have one lecture, which concludes the program for this week. A quick preview of the schedule for next week: we have a series of lectures by Elchanan Mossel on simplicity and complexity of belief propagation, taking place on Monday, Tuesday and Wednesday at this same time for one hour, followed on Monday and Tuesday by an exercise session. We also have a series of lectures by Shirshendu Ganguly on large deviations for sparse graphs, on Wednesday, Thursday and Friday — on Wednesday in the later slot, at 17:30 UTC, and on Thursday and Friday at this same time. You can see the full schedule on the OOPS website, along with the slides, videos, exercise sheets and some references for this week's lectures. Without further commentary — just to mention again that the videos are being recorded; if you do not want to appear in the video, please disable your camera and/or microphone. You will have an opportunity to ask questions over the chat as well as in person, in the middle of the lecture or at the end. — Thank you for joining us again, for the fifth and record time so far — the longest-run lecture sequence of this school. Can we share the screen? Okay, so thanks for sticking to the end, all of you, and thanks again for the organization. Today we will go into the technical details of the construction of the algorithm for the SK model and generalized spin models.
The context is that we want to maximize this Hamiltonian over the hypercube. This can be either a generalized spin model — a sum of multilinear terms — or, as a special case, the SK model with just this quadratic Hamiltonian; the coefficients W are given by GOE matrices. For the argument I will stick mostly to the SK model, because it is simpler, until somewhere near the end, and then change the formulas to the general case. As anticipated in the last lecture, we are going to construct an AMP algorithm of the following form. We have a vector z^t in R^n, with time indexed not by 1, 2, 3, … but by 0, δ, 2δ, …, up to 1 — think of δ as a small constant that will eventually be reflected in the (1 − ε) approximation ratio for the optimization problem. The general AMP algorithm works like this: at each iteration, the new z at time t + δ is obtained by multiplying the matrix W by a function of z at the previous time and subtracting the Onsager reaction term. Our whole objective is to design the functions f_t such that the vector f_t(z^t) gives as good an approximation to the optimum as possible. The key tool — in fact we will basically forget the details of the finite-n algorithm for most of the lecture — is the state evolution lemma, which we called a theorem before; it tells us the asymptotics of the joint distribution of the vectors generated by the algorithm. On one side, we take any test function ψ — a locally Lipschitz function with at most quadratic growth, say.
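The iteration being analyzed — matrix multiply, coordinatewise nonlinearity, Onsager correction — has a very short code skeleton. In this sketch the nonlinearity tanh and the step count are placeholders just to show the structure, not the optimized f_t constructed in the lecture:

```python
import numpy as np

# Skeleton of the AMP iteration described above, for a GOE matrix W:
#   z^{t+dt} = W f_t(z^t) - b_t * f_{t-dt}(z^{t-dt})
# where b_t, the Onsager coefficient, is the empirical average of f_t'.

rng = np.random.default_rng(3)
n = 500
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)       # GOE normalization: spectrum ~ [-2, 2]

def f(z):
    return np.tanh(z)                # placeholder nonlinearity

def f_prime(z):
    return 1.0 - np.tanh(z) ** 2

z = rng.standard_normal(n)           # initialization
m_prev = np.zeros(n)
for _ in range(10):
    m = f(z)
    onsager = f_prime(z).mean()      # Onsager reaction coefficient b_t
    z = W @ m - onsager * m_prev     # one AMP step
    m_prev = m

assert z.shape == (n,) and np.all(np.isfinite(z))
```

The Onsager subtraction is what makes the coordinates of z behave like the Gaussians described by state evolution, rather than picking up correlations from reusing W.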
We apply this test function ψ to the coordinates of the vectors z^0 up to z^1 — there is a constant number of them, 1/δ of them — and take the empirical average over coordinates, which I denote by an expectation with a hat (a notation I invented). The limit in probability of this is the expectation of the same test function applied to a vector of random variables, and the lemma also tells us the joint distribution of these random variables. First, the first coordinate is independent of the others, and is in fact N(0, δ) — you can see this immediately from the initialization above; I brought the initialization here, and it reflects in Z_δ obviously being N(0, δ). All the others are jointly Gaussian with some covariance Q. How do we construct Q? By recursion: the new entry Q_{s+δ, t+δ} is the covariance E[f_s f_t] of the nonlinearities applied at the previous times. So this lemma really gives us, in a sense, an isometry between the original finite-n vectors and a family of Gaussian random variables, and we can now analyze the algorithm by working mostly on that probability space.
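The Gaussian picture can be spot-checked for a single step: after one multiplication by a GOE matrix, the empirical second moment of the coordinates should match the predicted Gaussian moment E[f(Z)²]. The sketch below uses f = tanh as a placeholder and checks only the one-step second moment, not the full joint law:

```python
import numpy as np

# One-step illustration of state evolution: for z1 = W f(z0), with W GOE
# and z0 ~ N(0, delta) i.i.d., the coordinates of z1 are approximately
# N(0, E[f(Z)^2]) with Z ~ N(0, delta).  We compare second moments.

rng = np.random.default_rng(6)
n, delta = 2000, 0.5
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)

z0 = np.sqrt(delta) * rng.standard_normal(n)
z1 = W @ np.tanh(z0)

emp = np.mean(z1 ** 2)                              # empirical moment
pred = np.mean(np.tanh(np.sqrt(delta)               # Gaussian prediction,
                       * rng.standard_normal(500_000)) ** 2)
assert abs(emp - pred) < 0.05
```

The agreement reflects exactly the "isometry" described above: empirical averages over coordinates converge to expectations over the limiting Gaussian variables.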
I also anticipated last time that the key ingredient here is to impose an orthogonality condition. Which orthogonality? Define m^t = f_t(z^t), the iterates after the nonlinearity. I want to ensure that the normalized scalar products ⟨m^{t+δ} − m^t, m^s⟩/n go to zero as n → ∞, for all s ≤ t: increments orthogonal to the past. This is again dictated by the intuition we have about the structure of the low-lying states of the SK model and of spin glasses — something that was probably discussed earlier in this school. How am I going to achieve this? Let me introduce a piece of notation — I will not really use it much: let F_t be the σ-algebra generated by z^0 up to z^t. The way I will impose the orthogonality condition is a construction you will find familiar if you have studied some probability. All sums over times run over the grid with spacing δ, and — this is the key idea, the key construction — I build the sequence of vectors m^t (equivalently, the functions f_t: the functions give me the m^t, and the m^t define the functions) as a sum of the previous increments, each multiplied by something measurable with respect to the past. Now, when you push this through the state evolution theorem at finite n, it is useful to keep two pictures in mind. In the finite-n picture we have three sequences of vectors: the z^t we started with; the u^t, some functions of the z^t that I have not yet specified; and the m^t. Corresponding to these, I will have random variables that, without much imagination, I call Z^δ_t, U^δ_t and M^δ_t. The relation between the two pictures is that empirical averages over the vectors converge, for large n, to expectations with respect to these random variables; the random variables are built by applying to each other the same functions that we apply to the vectors.
So I can rewrite the state evolution lemma in terms of these random variables, more concisely, in the following form. First, the relation defining M: M^δ_T = √δ + Σ_{s = 0}^{T − δ} U^δ_s (Z^δ_{s+δ} − Z^δ_s), the sum running over the grid of spacing δ. This corresponds exactly to the finite-n equation between the vectors; if you want, this is the definition of the random variable M. And of course we have to impose that U^δ_s is measurable with respect to the σ-algebra F_s — the σ-algebra generated by the random variables Z^δ_0, …, Z^δ_s. That is my definition of M. Then there is the equation for the covariance of the Z's that I wrote before; since M is built from my functions, I can write it recursively as E[Z^δ_{t+δ} Z^δ_{s+δ}] = E[M^δ_t M^δ_s]. How do we read these equations? Given all the Z's up to some time t, I can construct the U's up to time t — they are measurable by definition, assuming someone tells me the measurable functions computing them — and, using the first recursion, all the M's up to time t. To move one step ahead I use the covariance recursion: now I know the covariance of the Z's up to time t + δ, and therefore I can construct all the Z's up to time t + δ.
And in this way I construct a sequence of random variables Z_0 to Z_1. Perhaps I should pause a second and ask whether this makes sense — because otherwise what follows will make no sense whatsoever. Okay. So now, with this construction, was I able to ensure the orthogonality property I was talking about? [Question from the audience: is there a physical meaning to M and U?] Yes — M is really meant to be the magnetization. The picture I have in mind — it would be nice to prove more details about this picture — is the Hamming hypercube with its tree of states: at any given time you are going down toward a leaf, and M_t is the magnetization accumulated when you are at radius t; the time t corresponds to the radius at which you are. I must say this is something I realized in hindsight; if you know the physics it can be useful, but it is not really necessary for anything in the proof. Okay. So, were we able to generate the martingale property? The claim is yes. Claim: (M^δ_t)_{t ≤ 1} and (Z^δ_t)_{t ≤ 1} are discrete-time martingales. How do we prove this claim? First, a couple of remarks. It suffices to prove it for Z: by the construction of M, if Z is a martingale then, since U is measurable with respect to the past, M must be a martingale — by the definitions of both.
Second, since Z is Gaussian, it suffices to check that its covariance is that of a martingale: E[Z^δ_t Z^δ_s] should equal some function of the minimum of t and s — call it Q^δ(t ∧ s); for the bookkeeping that follows I will write it with a shift, Q^δ((t ∧ s) − δ). The proof goes by induction: assume this is true up to time t and let us prove the next step. We want to show that E[Z^δ_{t+δ} Z^δ_{s+δ}] is some function of s alone, for every s ≤ t — that is the martingale property. Let us compute it. Substituting the recursion for Z, this covariance equals the covariance of the M's, E[M^δ_t M^δ_s], by the second formula; then substitute M from the first formula. You get a δ — the correlation of the two leading √δ terms — plus a double sum over t′ ≤ t − δ and s′ ≤ s − δ (dropping the superscripts δ for the moment) of E[U_{s′} (Z_{s′+δ} − Z_{s′}) U_{t′} (Z_{t′+δ} − Z_{t′})]. Obviously only the diagonal terms survive, so this is δ plus the sum over s′ ≤ s − δ of the second moment E[U_{s′}²] times the second moment of the increment of Z, which by the induction hypothesis is Q^δ(s′ + δ) − Q^δ(s′). This is a function of s only, which concludes the proof of the martingale property — and it also gives a recursion for the Q's, since this whole expression is Q^δ(s). So, now, how do we choose U?
Well, I want to choose it so that its second moment equals one, which simplifies the recursion; I can always start from any measurable random variable with finite second moment and normalize it. If you do that, the recursion for Q reads Q^δ(s) = δ + Σ_{t ≤ s − δ} [Q^δ(t + δ) − Q^δ(t)], with initialization Q^δ(0) = 0. Together these imply simply that Q^δ(t) = t. So the covariance of the process is very simple: Z is a martingale with E[Z_t Z_s] = t ∧ s. And how do we construct U? We will make a specific choice: U^δ_t = u(X^δ_t, t), suitably normalized, where the process X is generated by X^δ_{t+δ} = X^δ_t + v(X^δ_t, t) δ + (Z^δ_{t+δ} − Z^δ_t). This now defines the algorithm entirely, up to two functions u and v from R × [0, 1] to R — they have to satisfy some niceness properties that I will not go into detailing. So, up to these two functions, this defines the whole algorithm: it defines the functions f_t in the AMP iteration. Perhaps I will pause another minute to see if this makes sense.
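One way to see how the normalization of u forces E[M_t²] = t is to simulate the discretized pair of equations directly. In the sketch below the drift v = 0 and the choice u = sign are toy placeholders (sign conveniently has unit second moment already), not the optimal controls of the lecture:

```python
import numpy as np

# Euler discretization of the pair of recursions from the lecture,
#   x_{t+dt} = x_t + v(x_t, t) dt + (z_{t+dt} - z_t),
#   m_{t+dt} = m_t + u(x_t, t) (z_{t+dt} - z_t),
# with z a Brownian motion.  When u has unit second moment,
# E[m_t^2] = t by the Ito isometry; we check this at t = 1.

rng = np.random.default_rng(4)
paths, steps = 20000, 100
dt = 1.0 / steps

x = np.zeros(paths)
m = np.zeros(paths)
for _ in range(steps):
    dz = np.sqrt(dt) * rng.standard_normal(paths)  # Brownian increments
    m += np.sign(x) * dz    # u(x, t) = sign(x), so E[u^2] = 1 (toy choice)
    x += dz                 # v = 0: x is just the Brownian motion itself

assert abs(np.mean(m ** 2) - 1.0) < 0.05
```

Up to the discretization error from the first step (where x = 0), the empirical second moment of m at time one is 1, matching Q(t) = t.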
Part of the guidance here is that I am trying to get something with a well-behaved limit as δ → 0, so constructing things through these difference equations is very natural given that. One could go even more general and not make these specific choices, but they turn out to be broad enough to achieve the result I want, and they make it easy to prove existence of the δ → 0 limit — so they are convenient in that sense. Now, one nice consequence: remember that Q^δ(t) is the covariance of the process Z, so E[Z^δ_t Z^δ_s] = t ∧ s — perhaps before I wrote the maximum, but it is the minimum of t and s. What does this mean? It means that Z is just a Brownian motion sampled at the times 0, δ, 2δ, …. [From the chat: Andrea, the screen seems to be frozen, a bit after the definition of u and x. — Ah, I see; okay, this gives people some time to think.] So now I have to analyze the two equations that define the process: the equation for X, where Z is now known to be a Brownian motion, and the equation for M, which I can write a little more explicitly as M^δ_T = √δ + Σ_s [u(X^δ_s, s) / E[u(X^δ_s, s)²]^{1/2}] (Z^δ_{s+δ} − Z^δ_s) — the variable U divided by its second moment, with Z a Brownian motion.
So we have to analyze these two. It's natural to guess, and not too difficult to prove by some kind of stochastic-process analysis, that as delta goes to zero these processes converge to some limit, and the limit is given by a couple of SDEs: one is of course dX_t = v(X_t, t) dt + dB_t, and the other is the SDE for m_t. Perhaps I'll take a little bit more space. Okay, beautiful. Now, I don't like having to normalize, so what I can always do is impose the constraint that this second moment is equal to one for all s — or equivalently, by Itô's formula, that E[m_t^2] = t; the two things are the same. And then m simplifies. Now let me switch gears and write the formulas for the general case. I did the derivation for the SK model; what happens if you do the general p-spin model? Well, it's more challenging, but you get a very similar formula in the end, except that the function psi'' appears: psi''(s) appears in the term u(s, X_s) dB_s, and psi'' also appears in the SDE for X, in the v(X_t, t) dt term — basically the Brownian motion now gets rescaled. So this gives me, at any given time, the distribution of m_t: if I solve this SDE, I get a random variable that tells me, asymptotically, the distribution of the entries of the vector m_t in R^n. Now, how do I come up with an approximate optimizer? Well, I take m_1 and threshold it — in general the entries can be outside plus one, minus one, so I threshold to get something between minus one and one — and then I do some rounding to get a plus/minus one vector. But m_1 is already very close to a plus/minus one vector. This is the estimate of the ground state, and the claim is that
there is a small lemma one has to prove, which says that (1/n) H_n applied to this rounded thing — which is a plus/minus one vector — equals (1/n) H_n applied to m_1, times (1 + o(1)), with high probability. Okay, this is not hard. The other lemma: I computed the distribution of m, but I still have to do some work to compute the value of H_n at m, and this can be done. The claim is that (1/n) H_n(m_1) equals, up to o(1), some functional of u and v, and this functional is very simple — it depends implicitly on u and v — it is the integral between 0 and 1 of psi''(t) E[u_t x_t] dt. So now, if you give me any two functions u and v, I give you a linear-time algorithm, basically, that achieves this value, which is obtained by solving the stochastic differential equation and then plugging into this functional. So what do I have to do to design a good algorithm? Of course there are very bad choices here; to construct a good algorithm I have to maximize this integral. So algorithm design is basically the following problem: maximize this functional subject to the two stochastic differential equations that I wrote, and then the two constraints: E[m_t^2] = t, and the endpoint condition on m_1. I said that at the end you do the rounding, so I should have been more precise: this is a "bigger or equal", and the lemma only holds if m_1 is mostly between minus one and one. If m_1 is very big — if the vector m_1 is outside the interval [-1, 1] — then the thresholding will have a big impact. So what you want is that at the end you are in the interval [-1, 1], and this has to hold almost surely. Okay, so now we have to solve this kind of variational
problem. Okay, there are two approaches to this variational problem. One is that you guess u and v and compute the value; but what's more interesting is really trying to solve it, and to see the connection with Parisi's formula. The way one addresses it is: step one, we relax; step two, we construct an upper bound via duality. [You have a question, Andrea? — Yes: does E[m_t^2] = t hold only for SK, or for the general p-spin model? — No, we impose this for every model, and you see, this constraint is crucial: if you don't impose the condition Q(t) = t, then the constraint is difficult to impose and it's difficult to get the continuous-time limit, which is what you should expect. Perhaps we can comment on this at the end.] And then, once you have the upper bound — once you have the solution of the dual problem — it is quite easy to find the solution of the primal problem. I'll mainly sketch out steps one and two, because they really show the connection. Notice also that the only place where the fact that we are solving an Ising model shows up is in this endpoint constraint. For instance, one easy exercise — let's perhaps do the easy one — is to look at the case in which I impose only the constraint E[m_1^2] = 1. So I don't impose the Ising constraint but basically the spherical constraint: this, in m-space, corresponds to the vector m_1 having norm square root of n. In that case the thing can be solved explicitly — it's even easier, because this is a quadratic problem over the sphere — and you can check that in this case this problem gives a value
— the value of this optimization problem is one, which is one half of the leading eigenvalue of the GOE matrix. Okay, so how do we do the relaxation? Step one, the relaxation. Instead of my specific way of constructing u and v, I now replace u by any process adapted to the filtration (F_t), where F_t is the sigma-algebra of the Brownian motion. So I optimize over a larger space, and over this space, again, what I have to do is maximize. So the picture is basically this: we have m, and we have time, and time is between 0 and 1. The process has to end up between plus one and minus one at the end, and at each time the constraint is that the second moment of m_t equals t. So I can design the diffusion coefficient, as a measurable function, in such a way that at each time m_t has second moment t, and at the end you are in the interval [-1, 1]; subject to all that, I have to maximize this functional. Now, the constraint that really makes the problem challenging is this one, E[m_t^2] = t, because of course there are infinitely many of them — one for each t. So what we do is write a Lagrangian. One way to write the Lagrangian is: I take any nonnegative function gamma from [0, 1] to the reals, and I define mu(t) to be the integral between t and 1 of psi''(s) gamma(s) ds. I call this gamma, in the same way as the order parameter in the Parisi formula is called — not by chance, but because it will end up being the same object. And then I define J_gamma, which is the supremum over all adapted processes u of the objective plus the Lagrange term; the supremum is subject to the endpoint condition, and the endpoint condition is always meant almost surely. Okay. Now this is very nice, because it of course gives me an upper bound: a relaxation. Basically what happened is that I added the constraint as a Lagrange term. It's a bit different.
So when the constraint is satisfied, the claim is that this Lagrange term vanishes. Perhaps I'll check that. The term is the expectation of the integral between 0 and 1 of mu(t) times [psi''(t) u_t^2 - 1] dt. Writing things out, this is the expectation of a double integral: remember that mu(t) is the integral over s, between t and 1, of psi''(s) gamma(s) ds, so I have an integral over t between 0 and 1, an integral over s, the factor psi''(s) gamma(s), the piece [psi''(t) u_t^2 - 1], and an indicator of s bigger than t replacing the limits of the inner integral. But you see that if you use Fubini and invert the two integrals, I get exactly an integral over s between 0 and 1, ds, of psi''(s) gamma(s), and when I do the inner integral over t I get the expectation of m_s^2 minus s. So the term equals the integral between 0 and 1 of psi''(s) gamma(s) (E[m_s^2] - s) ds, and this is zero when the constraint is verified. So on the constraint set this vanishes, and hence you get a relaxation. And now the point is, I just want to compute J_gamma. How do I compute J_gamma? Well, you do dynamic programming. You write some J_gamma(t, z), which is the value of this stochastic optimal control problem when the particle m starts at position z at time t. So the picture is: m at time t is at position z, and I have to optimize between t and 1. So J_gamma(t, z) is the supremum over all controls adapted between t and 1 of, basically, the same objective restricted to [t, 1]: the expectation of the integral from t to 1 of psi''(s) u_s x_s plus one half mu(s) psi''(s) (u_s^2 - 1), ds.
And this is always with the constraint; since I start at z, the endpoint constraint is now that z plus the integral between t and 1 of sqrt(psi''(s)) u_s dB_s lies in [-1, 1]. Okay, so now you have this quantity, and you write a dynamic programming equation that gives it in terms of J at some later time theta and some other position y, optimized over the control on [t, theta], for theta bigger than t — I will stop writing the details. Once you have a dynamic programming equation, at least formally, you can take the limit theta to t and you get a Hamilton–Jacobi equation — a Hamilton–Jacobi–Bellman equation — so a parabolic PDE for J. This parabolic PDE allows me to compute J_gamma, and the connection with the original problem is of course that the original upper bound is J_gamma(0, 0): the value of the relaxation is less than or equal to J_gamma at (0, 0). Okay, now how is this related to the Parisi PDE? Basically what happens is that, up to a change of variables and a small constant, instead of working with J_gamma you can work with the Legendre–Fenchel dual of J_gamma: Phi_gamma(t, x) is something like the minimum over z of J_gamma(t, z) minus z times x — and there are some constants, some other factors here to eliminate uninteresting terms, that I don't remember. But basically, once you do this transformation, you can check that this Phi satisfies the Parisi PDE. So, by doing this transformation, you get the corresponding relaxation: the relaxation is less than or equal to P(gamma), the Parisi functional.
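The integration-order swap (Fubini) used above to show that the Lagrange term vanishes can be checked numerically on deterministic stand-ins. In this sketch — my own, with arbitrary test functions — g plays the role of psi''·gamma and f the role of the time-t integrand, and the identity being checked is ∫₀¹ mu(t) f(t) dt = ∫₀¹ g(s) (∫₀ˢ f(t) dt) ds with mu(t) = ∫ₜ¹ g(s) ds:

```python
import numpy as np

def trap(y, x):
    # simple trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(x)))

t = np.linspace(0.0, 1.0, 20001)
g = t                    # stand-in for psi''(s) * gamma(s)
f = np.cos(t)            # stand-in for the inner expectation at time t

# running[i] = integral of g from 0 to t_i, so mu[i] = integral from t_i to 1
running = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) * 0.5 * np.diff(t))))
mu = running[-1] - running

lhs = trap(mu * f, t)    # integral of mu(t) * f(t) over [0, 1]

# F[i] = integral of f from 0 to t_i
F = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) * 0.5 * np.diff(t))))
rhs = trap(g * F, t)     # integral of g(s) * F(s) over [0, 1]

print(lhs, rhs)          # both close to sin(1) - cos(1) ~ 0.3012
```

For these particular g and f both sides reduce analytically to sin(1) − cos(1), which the discretization reproduces to high accuracy.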
Okay, and you can carry out all of this program as long as gamma is in a suitable space — it was carried out for gamma in that space — and therefore you can stick an inf over gamma here. And at the end of the day, you can verify that if the inf is achieved, at some gamma star, you can reverse all the steps of the proof and come up with the two functions that we wanted, v(t, x) and u(t, x), which are actually given by derivatives of this function Phi. Yeah, so this, at least to me, clarifies a little bit the mystery of the Parisi formula, in which you start with a maximization problem and the value is given in terms of a minimization problem. At least now, the way I understand it is this: you have a maximization problem; you have an algorithm attacking that maximization problem that is described by a stochastic process; you maximize over the stochastic process and get the value the algorithm achieves; and the Parisi formula is really the dual of that problem. In the end, of course, as we discussed last lecture, you get the Parisi formula except over a different variational space, one that no longer has the monotonicity (non-decreasing) constraint. So in general you get a different value, and the two coincide at the optimum only under the no-overlap-gap condition. Okay, I guess that's all. Thank you, and thanks again to the organizers and to everybody who stuck it out. [Thank you, Andrea. I will unmute all the participants so we can all thank him together.]
Starting in the seventies, physicists introduced a class of random energy functions, and corresponding random probability distributions (Gibbs measures), that are known as mean-field spin glasses. Over the years, it has become increasingly clear that a broad array of canonical models in random combinatorics and (more recently) high-dimensional statistics are in fact examples of mean-field spin glasses, and can be studied using tools developed in that area. Crucially, these new application domains have brought up a number of interesting new questions that were not central from the viewpoint of statistical physics. These lectures will focus on these new questions: (i) Statistical questions: what is the accuracy or uncertainty associated to a certain statistical method? (ii) Computational questions: can we efficiently compute marginals of a Gibbs measure? Can we generate low-energy configurations? The following is a rough outline of the lectures: 1) High-dimensional statistics: general setting and key questions; the role of sharp asymptotics; examples and general phenomena. 2) Message passing algorithms, and approximate message passing (AMP); sharp analysis of AMP. 3) Optimal AMP algorithms; connection with Bayes error; connection with convex optimization. 4) Replica symmetry breaking; the Parisi formula; computational implications. 5) Optimization algorithms for mean-field spin glasses. This course will be accompanied by exercise sessions.
10.5446/55705 (DOI)
Hi, my name is Margret Plank. I'm from TIB, the Leibniz Information Centre for Science and Technology, in Germany, and my colleague Jens Köster and I have prepared a presentation on the key features of video abstracts, using the example of the TIB AV-Portal. "Science isn't finished until it's communicated." This is what Mark Walport said; he is a medical scientist who was chief executive of UK Research and Innovation until last year. What he means by that is that science and new technologies surround us in our daily lives. More than ever before, the COVID-19 crisis showed that science has a very big impact on our daily lives, and every citizen needs to be able to understand enough about these scientific results. But how can average citizens understand scientific studies and results? How can they distinguish between falsehood and truth? How do we bridge the gap between science and society? The best way to do this as a scientist is to communicate in a simple and comprehensible way, and the medium of video is particularly well suited for this. An example of this trend is the video abstract: the three-to-five-minute moving-image equivalent of a written abstract. With this format, scientists have the opportunity to explain the results and background of their concrete research work, as well as the methods used, the study results, and possible implications, to a potentially larger audience. There are quite different ways, styles, and formats to produce video abstracts; here are a few examples. First, "The Mutants Are Here", a video on the social life of ants (2017, Cell Press, 2,200 views), which uses different techniques, from whiteboard drawing to animation. Next is "Turning Up Gene Expression to Treat Disease" (2017, Cell Press, 5,650 views), made almost like a feature film with a continuous storyline — and it's also very funny. Next is a video on how dogs process speech (2016, Cell Press), probably one of the most successful video abstracts ever.
It has 480,000 views, and it uses talking heads and footage of the actual experiment. And the last one is Professor Paul Young in front of his house, in a very relaxed atmosphere, talking about a very complex mathematical problem. Video abstracts can be embedded in the websites of researchers, institutes, or science blogs; they can be shared on social media; and they can be included in lectures. In this way they develop a visibility that reaches into different spheres of society. Science journalists, decision makers, and average citizens can thus find out about the latest results and discussions in science, and this promotes both transdisciplinary research activity and informed decision making based on scientific facts. That's exactly what Sir Mark Walport meant when he said science is not finished until it's communicated. Additionally, Scott Spicer showed in a 2014 study, based on the New Journal of Physics, that articles accompanied by a video abstract are more likely to be downloaded than those without: of the top 25 articles with the highest usage, 36% had a corresponding video abstract. In addition, video abstracts positively influence the citation rates of scientific articles, as Song et al. explained in their study, which is also based on the New Journal of Physics. Accordingly, a video abstract is a useful tool for converting video views into downloads of online articles, especially when published in open-access journals. A video abstract is usually directly linked with a scientific paper that has been accepted and published in a scientific journal. Virtually all major publishers accept video abstracts — IOP, Elsevier, Wiley, Taylor and Francis — and video abstracts can of course also be published on YouTube and Vimeo. But there are also some portals which specialize in publishing scientific works and video abstracts, such as the TIB AV-Portal and We Share Science.
A lot of publishers have published design rules to help scientists produce their video abstracts. As an example, here are the design rules of the Institute of Physics, which are fairly generic but representative. They include, for example, that a video abstract should not last longer than four minutes, that it should include additional relevant materials such as images, animations, and lab footage, and that it must include a soundtrack providing a clear verbal narration of the visual content. We conducted a study examining video abstracts on the TIB AV-Portal, and therefore I would like to briefly introduce the portal. The TIB AV-Portal is an open portal for sharing scientific videos. We have been online since April 2014, and we use automatic video analysis — speech, text, and image recognition — as well as semantic indexing, in order to give better access to the content of videos and make things in them discoverable. We have more than 35,000 quality-checked videos in German and English from our focus areas: architecture, chemistry, computer science, physics, mathematics, and engineering. We host conference and lecture recordings, teaching and learning films, experiments, video abstracts, simulations, visualized research data, and documentaries, and the videos are predominantly under CC licenses. In our study we wanted to find out whether the most popular video abstracts on the TIB AV-Portal follow the common design and publishing rules or not. Our study provides a characterization of video abstracts published on our portal in the areas of science and technology. We identified video abstracts based on the number of views; a database of the 33 highest-ranked videos from four publishers was created, and each video was manually analyzed for different parameters.
Based on a literature review, including design rules, we examined some content factors said to have a great impact on popularity — video length, production, format, and audio — and added some more content and general factors: for example, was the corresponding paper named in the video and linked permanently via a DOI, and was the research data visualized in the video? So this is our number one video abstract. The title is "Soil Lacquer Peel: Do It Yourself". This is how it appears in the TIB AV-Portal: you can see the number of views under the video, you can share it, you can cite it, and you have the authors and the publication here. Then the formal metadata: again the title and authors, the license, and the identifier — all of the videos have a DOI — then the publisher, the release date, language, subject area, and abstract. Finally, the corresponding paper with its DOI, and some recommendations for further videos. Okay, so first a general characterization by subject: of the 33 videos, 19 are from physics, nine from earth science, and one each from informatics, biology, information science, engineering, and environmental science. The licenses: 19 were licensed CC BY 3.0 DE, 12 CC BY 3.0 Unported, and the rest under NC and ND variants. Did the video abstract have an intro or a title slide? Yes for the majority, 30 videos, but 3 did not. Was the paper mentioned in the video abstract, either via DOI or via title? 19 yes, 14 no. Now some characterizations of the production. The average length was around 4 minutes; the rest were somewhere a bit over that, like 4-5 minutes, and we had, I think, two which were even 6 minutes. Next: was visualized scientific data included, via graphs, models, etc.? 25 included some visualization of the data in the video abstract and 8 did not. Was the production rather amateur style or semi-professional?
23 were amateur style, which means rather like a screencast only, and 10 used semi-professional or professional styles, which included produced material such as footage, high-quality audio, and design. Then the general genre: 16 used a simple presentation and 8 a documentary style; 3 used animation, meaning techniques like whiteboard drawings; 4 used a dynamic presentation, with dynamic cuts and included footage material; and one used monologue style only. When it comes to additional elements and materials, the majority used graphics, stills, slides, and pictures to make their presentations more interesting: 27 used graphics, stills, slides, and pictures; 25 used text in the video; 11 used animations; in 16 videos the authors appeared themselves; 16 used elements of interviews and talking heads; 8 showed the experiment or observation itself as footage; and 12 used documentary real-life footage. When it comes to sound quality, we found 17 with good quality, 10 with medium quality, 3 with not-so-good quality, and 3 that didn't use audio or used music only. Okay, so what does this mean? We thought it was quite good that the videos in the analysis were not longer than 4-5 minutes, 3:55 minutes on average. Most video abstracts did include additional relevant materials such as images, animation, and real lab footage. Most of the entries had medium to good sound quality (82%) and were thus acoustically acceptable. What was not so good? Well, most videos used just one presenter or speaker who gave a classic, simple, short presentation. Only half of the authors linked to their papers — and linking to the paper is, of course, the general purpose of a video abstract. Only a few linked to their underlying research data. One video was produced in portrait format, unlike all the others.
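Tallies like the ones above are easy to reproduce once the videos are coded as records. The field names and the three sample records below are my own illustration, not the actual study database:

```python
from collections import Counter

# Hypothetical records in the style of the study's database; the field
# names and values are my own illustration, not the actual dataset.
videos = [
    {"production": "amateur",           "paper_linked": True,  "length_s": 212},
    {"production": "semi-professional", "paper_linked": False, "length_s": 238},
    {"production": "amateur",           "paper_linked": True,  "length_s": 360},
]

by_style = Counter(v["production"] for v in videos)        # production styles
linked = sum(v["paper_linked"] for v in videos)            # papers linked
avg_s = sum(v["length_s"] for v in videos) / len(videos)   # average length in s
print(by_style, linked, avg_s)
```

With the real 33-video database in this shape, every percentage reported in the talk is a one-line aggregate.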
The use of background music seemed unmotivated most of the time, and in some cases it didn't really fit the video. A few videos would have needed editing: for example, the interface of PowerPoint, or of the screencast tool, is still visible in some videos, which doesn't make a good impression. Three did not even present a title. So we derived some additional guidelines from our findings. Make sure your video has a DOI, so that it is sustainably citable. Make sure you credit your paper, ideally linked via a digital object identifier. Make sure you link to the underlying data. Make sure your video has an intro, and maybe also an outro. Make sure you mention all of the authors in your video. And make sure you share your video on at least one science-friendly portal, so that it is citable, digitally preserved, and connected to the research data and the paper — we highly recommend the TIB AV-Portal. Okay, now we have some literature for you here. If you have any further questions, please don't hesitate to contact either me or my colleague Jens Köster. Thank you for your attention, and bye.
Today, more and more scientific videos are published online. One visual format that seems particularly suitable for communicating scientific content is the video abstract. This is the 3-5 minute moving-image equivalent of a written abstract. With this format, scientists have the opportunity to explain the results and background of their concrete research work, as well as the methods used, the study results, and possible implications, to a potentially larger audience. Recently, some studies have been published on this subject, in particular with a focus on content analysis for specific domains and on classifications of online videos in general. This paper explores the topic of video abstracts and publishing guidelines in order to answer the following question: „Do authors follow the existing guidelines, and are the guidelines sufficient for the publication of a scientific work?“ In a literature and web review we looked at the existing publishing guidelines and extracted the major rules. A database of the thirty-three most viewed video abstracts published on the TIB AV-Portal was created. Each video was analyzed against different criteria, such as link to the corresponding paper and research data, length, format, etc. Results indicate that the most common guidelines were followed by the authors, such as a maximum length of 4 minutes, inclusion of additional relevant material such as images, animations, and lab footage, as well as good audio quality. However, there is still a lot of potential to get more out of video abstracts, e.g. by adding a title and a link to the corresponding paper and research data.
10.5446/55678 (DOI)
I think we can start the streaming. Okay. Where's our — oh, I guess we don't necessarily need the opening slide. I just took it off; I can put it back, though. As you wish — it doesn't matter. Okay. So, we're back for the second lecture of the series. He's been giving a series of lectures, in Simplicity and Complexity, on belief propagation. Again, the video will be recorded, so if you do not want to appear in the video, please turn your camera and microphone off; we will stop recording the lecture at the end of the hour. Today's lecture will be followed, after a half-hour break, by an exercise session led by Frederick, using some of the exercises mentioned in the previous lecture and possibly in today's. If you have questions about those, or about other details of proofs, please stick around in the same Zoom room for the exercise session. Okay. Thank you. Please help me by slowing me down — whenever I teach a class I ask students whether I teach too slowly or too fast; here I'm not asking. Okay. So we'll continue exactly where we were last lecture, and I'm going to share the screen with you. We were looking at this picture in two different ways. On one hand, we think about this picture as the broadcast process on the tree, or a Markov chain on the tree, where each bit of the image goes through this process of copying with probability theta and randomizing otherwise, and you observe the pixels at the leaves.
That's one way of thinking about it, where the tree is known: we're just looking at this broadcast process, and maybe we want to recover the root. Now I'm going to talk a little more formally — still within the linear theory, if you want — about the other problem. In the other problem, here's my pen: I will again erase the information that we don't have. We do not have any of the information up here — even before, we didn't have the internal pixels, but now you don't even have the structure of the tree. All we have is the vector in red here, and samples of it, and we want to recover the structure. Okay, so that's the problem I'm going to talk about, and again we're going to see the significance of the Kesten–Stigum threshold. Any questions about this? Okay, so I'm going to give a simplified version of a sequence of works — what I'm going to present today is much simpler, but not in a way that matters for the phase transition — a series of works from the beginning of the 2000s, some with Mike Steel and some with Daskalakis and Roch. Essentially, they say that there's an important phase transition that happens according to whether 2 theta^2 is bigger than one or 2 theta^2 is less than one. The reason we have a two here is that we are looking at the case where q = 2 — the symbols we're looking at are bits, plus or minus — and the case where d = 2, so every node has exactly two children; this number two here is the d. And what does the theorem say? The theorem says that for this problem of reconstructing the tree, which I'm going to define a little more formally in a second, the amount of information that you need — in biological terms, it's the length of the DNA sequence; in terms of the picture that you had before,
it's the number of pixels in the image. The number L of samples needed to recover T is logarithmic in n — I should say what n is: n is the number of nodes in the tree. It's logarithmic in n above the phase transition, when 2 theta^2 > 1, and it's a power of n when 2 theta^2 < 1. Okay, so the phase transition that we've seen before for this correlation-decay phenomenon also plays an important role for this inference task — and maybe we'll see another example for a different inference task in a little bit — for this task of building the tree from independent samples, or sequences. Again, it goes from logarithmic in the size of the tree to polynomial in the size of the tree. Another way of writing it: remember, log n is essentially the depth of the tree, up to a constant. So the number of samples you need is of the order of the depth H of the tree if 2 theta^2 > 1, and it's exponential in H if 2 theta^2 < 1. This is a pretty radical phase transition in terms of this parameter. Questions about this? Okay, so let me define the model a little more formally — much more simplified than the process in the actual theorem, and the reason is that biology is more heterogeneous than this idealized model we're looking at; maybe I'll comment on that a little later. But just to get the picture of the phase transition, this very simple model will do: the broadcast process on trees, or the Markov chain on the tree, for H levels. So we have the usual picture: this is going to be our vector X_H, and d is going to be two, meaning every node has two children, and it's a complete binary tree.
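A small simulation makes the 2 theta^2 threshold tangible; this is my own toy sketch, not part of the lecture. Each child copies its parent with probability theta and is a fresh uniform ±1 bit otherwise, so the channel's second eigenvalue is theta, and a majority vote over the leaves recovers the root noticeably better than chance above the Kesten–Stigum condition 2 theta^2 > 1 and barely at all below it:

```python
import numpy as np

rng = np.random.default_rng(1)

def broadcast_leaves(root, theta, depth, rng):
    # Broadcast on a complete binary tree: each child copies its parent
    # with probability theta and is a fresh uniform +/-1 spin otherwise.
    x = root[:, None].astype(float)
    for _ in range(depth):
        x = np.repeat(x, 2, axis=1)                  # two children per node
        keep = rng.random(x.shape) < theta
        noise = rng.choice([-1.0, 1.0], size=x.shape)
        x = np.where(keep, x, noise)
    return x

trials, depth = 4000, 8
root = rng.choice([-1.0, 1.0], size=trials)
acc = {}
for theta in (0.9, 0.3):                             # 2*theta^2 = 1.62 vs 0.18
    leaves = broadcast_leaves(root, theta, depth, rng)
    guess = np.sign(leaves.sum(axis=1) + 0.5)        # majority vote, ties -> +1
    acc[theta] = float(np.mean(guess == root))
print(acc)  # well above 1/2 for theta = 0.9, near 1/2 for theta = 0.3
```

Majority vote is of course a cruder statistic than the full posterior, but the qualitative gap between the two regimes is already clearly visible at depth 8.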
Okay, so that's the picture we're going to look at. The key issue is that we want to recover the structure of the tree, and I don't believe there's a very compact way of saying it. One way of saying it is that there's an unknown permutation σ of the leaves — σ is an element of S_{2^h}, because there are 2^h leaves — so you don't know how to order them. What's happening is that you're getting i.i.d. samples of the vector x_h, except you don't actually get the vector x_h: you get the vector x̃_h. What's the difference between x_h and x̃_h? Maybe I'll do the picture again. If we knew the tree, this would be coordinate number one, this coordinate number two, this coordinate number three, this coordinate number four, and so on, and as biologists we would say that coordinates one and two are very close together — they are sister species. But we are not given this information; we are given this vector permuted. So what's given here is an independent sample from the same distribution as x_h, and x̃_h is exactly x_h permuted according to the permutation σ. So maybe my first coordinate is going to be this coordinate, my second coordinate is going to be that coordinate, and so on — it's just determined by how you collected the data; you don't really know which species are related to which when you collect it. And what is the goal? The goal is to recover the tree. Now, this is not really defined yet. One way to try to define it is that you want to recover σ — but you actually cannot recover σ. Why not? I'll just draw it: this tree with species a, b, c and d is the same as this tree, where I have d, c, a and b.
Of course, when I wrote it down — think about why I'm giving the leaves names: the leaves are the species; this is one type of monkey and that's another type of monkey, or one type of ape and another type of ape. Those are the names. But when I draw this tree and when I draw that tree, they are exactly the same thing: each says that this species of ape is closest to that species of ape, and this one closest to that one. So these are exactly the same. So when I write "recover σ", I mean recover σ modulo the ways of redrawing the same tree — modulo a group Γ. There's a way of writing down what this Γ is, but Γ is just all the possible ways of redrawing the tree so that it stays the same. If I take h = 2, I can actually draw all the trees for you: it's (ab)(cd) — that's the one we had before — it's (ac)(bd), and it's (ad)(bc). Those are the only ones possible when you have two levels: once I decide which species is closest to a, I've decided everything — it determines the partition, and I get the tree. If you want to think a little about what happens when h equals three, the combinatorics will give you seven times five times three times three — this is correct; I calculated it five minutes before I joined the Zoom call. Essentially a has to be matched to something, then the next leaf has to be matched to something, which is determined in part, and then there are the ways of drawing the next layer, the way we did. Okay — questions about what we want to do? Okay. So this is what you want to do; now I'm going to tell you how to do it. That's the best thing in life.
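An aside on the count just quoted (my formula, checked against both numbers in the lecture): the number of distinct tree shapes is the number of leaf labelings divided by the automorphisms of the complete binary tree — one swap per internal node, of which there are 2^h − 1.

```latex
\[
\#\{\text{trees on } 2^{h}\text{ leaves}\}
\;=\;\frac{(2^{h})!}{2^{\,2^{h}-1}},
\qquad
\frac{4!}{2^{3}} = 3,
\qquad
\frac{8!}{2^{7}} = 315 = 7\cdot 5\cdot 3\cdot 3.
\]
```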
It's like when you teach undergrad linear algebra: someone needs to solve a linear equation — here's how you solve a linear equation. So here's how you're going to recover this tree. It's a recursive procedure, which I'm going to describe. The procedure is the following: for every two species i and j — i and j are coordinates of the vectors that I get — I estimate the covariance of the i-th coordinate and the j-th coordinate. So I look at letter number one of the DNA, letter number two, letter number three, and I compute the empirical covariance, which is just an estimate of the true covariance. Why do I do this? Now I say: okay, I'm node number one — which node is most correlated with me? Well, under the permutation it's this one, so this node is going to be my sibling. Then the next node asks which node is most correlated with it — that's going to be this one — and so on and so forth: I pair them up in this way. Then I recurse. Suppose I ran my procedure and decided that this node a goes with this one, c goes with that one, and so on — I've matched them up. Now I want to recurse: I try to do the same thing for the parents, to find the right matching among the parents. But to do that I have to know the sequences — the values — at the parents, and I don't know those values, because they're not given: I didn't collect data on species that do not exist anymore, that are extinct. So I cannot do that directly. What I do instead is first estimate the parent sequences. How? Using the majority estimate: I define a new vector Z, where each coordinate of Z is the majority of the descendants.
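One level of this recursion can be sketched as follows (a hypothetical helper, not the lecture's actual code; ties in the two-child majority are broken by a coin flip, one of the options discussed in a moment):

```python
import random

def pair_and_lift(samples, rng=random):
    """One level of the recursive tree reconstruction: pair up
    coordinates by empirical covariance, then estimate each parent's
    bit by majority over its two children (a coin flip on ties)."""
    m, n = len(samples), len(samples[0])
    cov = lambda i, j: sum(s[i] * s[j] for s in samples) / m
    unpaired, pairs = set(range(n)), []
    while unpaired:
        i = min(unpaired)
        unpaired.remove(i)
        j = max(unpaired, key=lambda k: cov(i, k))   # most correlated partner
        unpaired.remove(j)
        pairs.append((i, j))
    # "majority" of two bits: the common bit if they agree, else a coin flip
    lifted = [[s[i] if s[i] == s[j] else rng.choice([-1, 1])
               for (i, j) in pairs] for s in samples]
    return pairs, lifted   # recurse on `lifted` to build the next level up
```

Calling this repeatedly on its own output reconstructs the tree level by level, which is the whole algorithm.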
Okay, so now some of you should have an objection — and it's a good point to object — because what is the majority of two bits? The majority of two bits is that bit, if they're the same. And what do you do if they're not the same, which happens quite often? Maybe you choose at random; or maybe you completely discard the sample, and say that for this sample you're not going to compute the correlation. So this is a bit of an issue at the first level. But when you repeat the process — having figured out that this node and this node are siblings, and that node and that node are siblings — the next round computes a majority over four values, and with four values it's a little less likely that the vote is tied. So this issue of things being not well defined becomes less and less of an issue, and in any case it's not a real issue. Okay, so you repeat this, and in this way we're going to recover the structure. What are the exercises for you? You know enough to prove everything written on the slides; let me state the two exercises. Let P(m, h) be the probability of recovering the tree from m independent samples. Exercise one: if 2θ² is bigger than one — we're above the Kesten–Stigum bound — and the number of samples is at least a constant times h (the constant may depend on θ, and it does), then the probability that you recover the correct tree is at least 0.9. So you run this procedure with a number of samples that is just a constant times the depth, and the probability that you recover the correct tree is at least 0.9.
Exercise two: if 2θ² is less than one, then the probability that this specific procedure works is — no matter how large m, the number of samples, is — at most m times some constant times c_θ to the power h, where c_θ is a number less than one. What does that mean? It means that in the region where 2θ² is less than one, the length of the DNA sequences, or the number of samples that you need, is exponential in the depth of the tree. So it goes from linear in the depth when 2θ² is bigger than one to exponential when 2θ² is less than one. "So m is what was L in the theorem statement?" Yes — thank you for remembering: m is equal to L, and h is of order log n, so in the theorem statement the bound becomes log n. Good. Other questions? Nobody likes to solve exercises. Is there a question? There was a question in the chat: are the optimal constants known? The optimal constants are not known. The constant in m ≥ C·h is not known, which I think is slightly less interesting; but the constant c_θ in the exponent is actually interesting, and it is not known either, because it tells you what the right exponent is — and we don't know the right exponent. That's a very good question, and I think it could be a good research problem; the proofs that I know are not fine enough to answer it. But just to be clear, I think the right question to ask is: what is the right exponent if you use the best algorithm? I've just presented one specific algorithm; you want to ask what the best constant is over all algorithms. Related to that, let me give you a way of proving exercise two, and the proof will show that this does not depend on the specific algorithm that we used.
In fact, any way you try to recover these trees, you are going to be unsuccessful, if 2θ² is less than one, unless the number of samples is exponential in h. So that's what I'm going to do next — prove something more general. A clarification from Thomas: yes, the covariance is the sample covariance. And the reason you need log n — let me just sketch it — is that you compute a lot of covariances, and you want them all to be concentrated, so you need a union bound over concentration bounds to show they concentrate around what they should. It's also known, by a separate argument, that this log n — this constant times h — is necessary, from a simple information-theoretic count; the fact that you need at least a constant times h is well known from classical work. Good. So let me give you the more general argument, which says that if 2θ² is less than one, then you need exponentially many samples in h in order to recover the tree — and, maybe I didn't say it in the title, no matter how you do it. It's a very simple argument — and I'm not sure I'll give you any more full arguments in this course, since of course I'm running out of time, but let me give you this one. Okay, so let's do the argument. First I want to start with an exercise that relates to something we've seen in the last lecture. We've seen this for a fixed tree of h levels — so here we have h levels. We can either start with a plus at the root, and then the measure we get on the leaves is P_T⁺, or we can start with a minus at the root, and the measure we get is P_T⁻.
If you remember, in the last lecture we had a very simple Bayes rule, which tells you the change of measure when you compare P_T⁺ and P_T⁻ against P_T. From it you can see that the total variation distance between P_T⁺ and P_T⁻ — the distributions on the leaves, for one sample — is at most (I think it's actually equal to) two times the expected value, under the mixed measure that takes half of this and half of that, of the absolute value of M_h, where M_h is the magnetization of the root: the conditional expectation of the root given the boundary. And what we've seen last time — sorry, applying Cauchy–Schwarz; you have to prove this, but I'm telling you how to solve the exercise — applying Cauchy–Schwarz and using what we've seen last time about the exponential decay of the L² norm of M_h, this is at most two times (2θ²) to the h over two. That's the exercise I want you to prove; it follows pretty easily from things we've seen in the last lecture. Okay, why does this help us? I'm going to show you something a little weaker than the full claim, but here it is: I look at two very big trees, each of them consisting of four subtrees — a big subtree A, and B, and C, and D. In one tree I match subtree A with subtree B, and C with D; in the other I match A with C, and B with D. But these are the same subtrees — A is some big tree of h levels, B is some big tree of h levels — and when I say the same, I mean the names written here are exactly the same names of species written there, and so on.
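For reference, the exercise just stated, written out in the lecture's notation:

```latex
\[
d_{\mathrm{TV}}\!\left(P_T^{+},\,P_T^{-}\right)
\;\le\; 2\,\mathbb{E}\bigl[\lvert M_h\rvert\bigr]
\;\le\; 2\sqrt{\mathbb{E}\bigl[M_h^{2}\bigr]}
\;\le\; 2\,(2\theta^{2})^{h/2},
\]
```

where $M_h = \mathbb{E}[x_\rho \mid x_h]$ is the magnetization of the root given the leaves, the expectation is under the mixed measure, the middle step is Cauchy–Schwarz, and the last step is the exponential decay of $\lVert M_h \rVert_2$ below the Kesten–Stigum bound from the previous lecture.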
So I claim that, from the previous claim, it follows that if I look at the total variation distance between the full leaf distribution of this tree and the full leaf distribution of that tree, it is essentially at most the same thing — instead of two times this quantity, it's going to be eight times. Why is that? For a somewhat simple reason: I'm going to condition on the values at the roots of the four subtrees — I tell you what those four values are. Once I tell you those values, everything below is independent, and if you ask what total variation distance you get between this measure and that measure, you'll see that the same bound we had before applies at each of the four roots. The factor of four comes from the fact that total variation distance over multiple independent coordinates obeys the linear bound — the triangle inequality, or the tensoring inequality, holds. And once you have that, you can apply it to independent samples: if I have m independent samples, I apply the same fact — total variation distance for independent coordinates grows at most linearly — and I get that the total variation distance, when I have m independent samples, or DNA sequences of length m, is at most 8m times (2θ²) to the h over two. What does that mean? It means that in order to distinguish between the two topologies, I need this quantity to be at least, say, one half — at least a non-negligible number. Because if the two distributions of the samples I get in the one case and the other are very close together, the distributions are not distinguishable; to be distinguishable, the total variation distance has to be close to one.
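Assembling the tensoring step and the single-sample bound into one display (my condensed summary of the argument as stated):

```latex
\[
d_{\mathrm{TV}}\!\left(P_{1}^{\otimes m},\,P_{2}^{\otimes m}\right)
\;\le\; 8\,m\,(2\theta^{2})^{h/2},
\]
```

where $P_1, P_2$ are the leaf distributions of the two topologies. Distinguishing them requires the left-hand side to be bounded away from zero, hence $m \ge c\,(2\theta^{2})^{-h/2}$ samples — exponential in $h$ whenever $2\theta^{2} < 1$.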
So this has to be close to one, and for it to be close to one, the number of samples that I need has to be exponential in h — at least of order (2θ²) to the power minus h over two. Okay, and that's the lower bound. Now let me do something non-traditional: I think we're going to take a two-minute break soon — this is the break I promised you. But before that, I want to make a decision: another application of this is the block models, and I have to decide whether to talk about it or to go straight into the nonlinear theory. So here are two questions for you, the audience. Let's see if you know how to press all of your buttons on Zoom — you can raise your hand; in the participant list, at the bottom, there should be a button for raising your hand and such. First question: how many of you have seen the recent work on the block models and its relation to Kesten–Stigum? Raise your hand if you've seen it. I see one hand... okay, that's quite a bit. Second question — a different one: how many of you want to hear about the block model and its connection to this dθ² condition, the Kesten–Stigum bound? And, so I know how many people are actually listening, let's lower the hands: how many of you would rather I go immediately into the nonlinear theory? "I can't figure out how to raise my hand" — it seems a couple of you are co-hosts and might not have that button. Okay, so it seems like enough of you. Let's take a two-minute break, during which I'm happy to take questions; in the meantime I'll think about how quickly I want to go over the block model — the slight preference seems to be that I say something about it, so I will.
Okay, so — any questions? We start again in two minutes. Yes, I can see the chat. "Is there any guess what happens in the critical case, when dθ² is equal to one?" For the broadcast process everything is known about it — actually via the arguments we gave, with the second-order term, this minus-θ⁴ business: the information between the root and the leaves still decays, but not exponentially, only polynomially. For phylogenies, I think nobody has actually worked this out, but — and this is something I didn't talk about — in phylogeny it doesn't really make sense to assume that all the branches have the same θ; you need to allow different branches to have different parameters, so the critical case is perhaps less natural. It was natural in our setting to assume all edges have the same θ, just to exhibit the main connection between correlation decay and reconstructing trees; but if you tell people who actually do phylogeny that all the θ's are the same, they will laugh and walk away. That's why you don't want to do that. Okay, let me share my screen again — no more questions? I'll continue, and at least next time we will do the nonlinear theory, I promise: a little nonlinear theory, where we know that something genuinely different is happening. Okay, good. So this is where we were. I'm going to talk about another application — I'll try to do it briefly; I'll try to talk slowly but skip some material, so that we get at least a glimpse of the nonlinear theory today. So here's the block model. One of the reasons I considered skipping it is
that it's perhaps better known than some of the other work I've talked about — but I will discuss it nonetheless. It's a random graph model on n nodes. What's the twist? The twist is that the nodes have colors: half of the nodes are blue and half are red — let's talk about the case q = 2; there's an obvious generalization to more colors. Two nodes of the same color are connected with one probability, and two nodes of different colors with another, given by these formulas. I chose the parametrization so that — you should check this; maybe it should have been an exercise — the average degree is d, and if u is a neighbor of v then, thinking of the colors as ±1, the expected value of x_u·x_v is θ. So it's similar to what we had in the broadcast tree model, where for a parent u and child v the expected value of x_u·x_v is θ: with probability θ you copy, and otherwise you rerandomize. So that's the basic block model. I didn't give references; it goes back to the 70s and is a widely studied model. The inference question, which has attracted a lot of attention in the last seven, eight, maybe ten years, is: which nodes are likely to be red, and which blue? Of course, I told you that half of the nodes are blue and half are red — but when I ask you this question, I'm not going to tell you which ones.
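A small sampler sketch. The exact edge probabilities are not fully legible in the transcript, so I use one standard parametrization that satisfies the two stated properties — average degree d, and E[x_u x_v] = θ across an edge:

```python
import random

def sbm_sample(n, d, theta, rng=random):
    """Sample from the two-community block model: within-community edge
    probability d*(1+theta)/n, across-community d*(1-theta)/n, so the
    average degree is d and E[x_u x_v] = theta for neighbours u, v."""
    colors = [1] * (n // 2) + [-1] * (n - n // 2)
    rng.shuffle(colors)
    p_in, p_out = d * (1 + theta) / n, d * (1 - theta) / n
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            p = p_in if colors[u] == colors[v] else p_out
            if rng.random() < p:
                edges.append((u, v))
    return colors, edges
```

In the sparse regime, d = O(1), this produces graphs with isolated vertices and small components — which is exactly why only detection (beating a random guess) is on the table, as discussed below.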
So let's see what I mean by that. Maybe I'll tell you what got me — what got a bunch of us — into this business: a conjecture in a paper by Decelle, Krzakala, Moore and Zdeborová, which says that belief propagation is the optimal algorithm for solving this problem, and that you can do better than random if and only if dθ² is bigger than one. This should sound very related to what we've been talking about: you're seeing the Kesten–Stigum condition dθ² > 1 again, and you're seeing belief propagation again — maybe we'll talk about what belief propagation means in this setting, where it's supposed to be optimal. Let me demonstrate the model with a picture. I start from this model — maybe this is not a good sample, but some of the points are blue, some are red; between two points of the same color there's one edge probability, between two points of different colors a different one — that's a sample from the model. What's the inference problem? The inference problem is that I'm given exactly the same graph, but without the colors. I'm given this picture, and somebody asks me: please tell me which nodes are red and which are blue, just from the picture of the graph. The goal is to say: these nodes are blue, and those nodes are red. Let me just make two brief comments. First, there's a natural symmetry here, the red–blue flip: if I switch all the reds to blues and all the blues to reds, I get exactly the same probability distribution. So given a sample without the colors, I can't really say which class is red and which is blue — I can only say this is one class and that is the other; one of them could be red and the other blue, or the other way around. The second comment is that we are looking at a pretty sparse graph.
So we cannot really recover everything — we cannot recover all the nodes, even up to this global flip. This is because, if you just think about it, in an Erdős–Rényi-style random graph with constant average degree d there are many isolated nodes, and for an isolated node there's no way for you to know if it's blue or red; and there are small components, and a bunch of other such issues. So what you really want is to infer a little better than random. There's this notion of detection, which is: classify better than random. Okay, so that's what we want to do. Questions? Okay. Let me tell you one easy theorem, which I'll prove by pushing slides forward and waving my hands — let's see if I can control my hands well enough to convince you that I know how to prove theorems. Of course I'm going to cheat in a number of places, but let's try. The easy direction is the following: below the Kesten–Stigum bound — if dθ² is less than or equal to one — it's impossible to infer better than random. Let me show you why; like I said, this is the easy direction, and I'm just showing pictures. So what's the picture? If I'm able to infer better than random, then I'm able to say, better than random, whether this node and that node — two random nodes — have the same color or different colors: if I have a classification that's better than random, then on average, given two nodes, I can classify them better than random. And by symmetry, you might as well assume that I tell you this node is red, and the question is whether you can classify the other node — the green one — better than random. In fact, I'm a very generous person: I'm not just going to tell you that this node is red, I'm also going to expose a radius-h neighborhood of this vertex in the graph.
A neighborhood of radius h of this vertex is going to be, with high probability, a tree. And I'm actually going to tell you the colors at all of level h of it: I give you this vertex, and all of these boundary colors, and your goal is to guess the value of the green node. What's the point? The point is that x_h — the colors at the boundary — is essentially level h of a broadcast process, and that when dθ² is less than one, you cannot recover the green node from x_h with good probability: as h goes to infinity, the probability of correct recovery goes to a half, or the magnetization goes to zero. But you'll say: what about the red node I told you about, outside the ball? For that there's another property — and this is actually where most of the proof goes — which uses the fact that this is essentially a Markov random field. It's not exactly a Markov random field, but essentially it is. The Markov random field property, for those of you who don't know it, says that conditional on this ring that separates this vertex from that one, the two colors are independent; so the additional information about the node outside doesn't give you anything. So you see a very clear connection to the broadcast-on-tree model — and of course the physicists were aware of this connection. The theorem itself is pretty easy, but an important ingredient is the paper we talked about, of Evans, Kenyon, Peres and Schulman, which handles the situation where the tree is not exactly regular — not all the degrees are the same, but the average degree, the branching number, is what matters. So the results extend to branching-process trees where the average degree is d, not just to exactly d-regular trees. Now I want to say just a few words about the other direction: that if dθ² is bigger than one, then it is possible to detect —
it's possible to infer this partition better than random — though I won't say much about the proof. Sorry — yes, I'll slow down. There's a question in the chat about why the neighborhood is a tree. Thank you, this is a good question. It's something one has to prove, but it's very familiar if you've seen random graphs before: when you look at a sparse random graph with constant average degree, there are essentially no short cycles. If you just compute the probability of having a triangle through a given vertex, you'll see it's pretty small. The same argument works here: the key factor is that d is a constant, so in terms of G(n, p), the probability of an edge is one of two numbers — either d plus a constant, over n, or d minus a constant, over n — which is still very sparse. Thank you for the question. Okay, so the more exciting result is that if dθ² is greater than one — and this was proved by us, and by Massoulié independently — then it is possible to detect. I'm not going to tell you much about the proof, except about a previous attempt at it, from earlier work by me and collaborators, which eventually bore fruit — and I want to highlight the connection to belief propagation. After we started thinking about the problem, we wrote a joint paper with the physicists — not a proof of the conjecture, but it says the following. Look at A, the adjacency matrix of the graph; it's very common to run spectral algorithms on matrices in order to find partitions.
But instead of looking at the matrix A, we're going to look at this matrix — a 2n-by-2n matrix of this weird form. Then the claim is that the second eigenvector of this matrix is correlated with the partition. I see a few of you staring at this matrix and asking: why would you come up with this matrix, and why the second eigenvector of it? What I want to tell you is that the reason we got this matrix is that we linearized belief propagation. Even when you don't have a tree, you can pretend that you do: you apply the same recursion that we saw in the last lecture over and over again, even though it's not a tree; you get some probabilities, and you can classify according to them. What we did is, instead of working with belief propagation itself — this nonlinear operator that maps probabilities to probabilities — we linearized the operator, and by linearizing it we got this matrix; and then we said: okay, it's this matrix that tells the story. As I mentioned before, the conjecture itself was proved by Bordenave, Lelarge and Massoulié a couple of years after this. So it's another instance where you see belief propagation, you use linearization, and it gives you the right answer. Okay, so maybe I'll conclude this part with a beautiful picture from this paper with the physicists of how it actually looks. This matrix is not the kind of matrix you're used to: it's not normal, it's not symmetric, so the spectrum is interesting — the spectrum is complex. There's always an eigenvalue that corresponds to the average degree; that's not the interesting one. The interesting one is this other real eigenvalue, outside the bulk.
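A sketch of this linearized-BP spectral method. I use the 2n-by-2n Ihara–Bass-type companion matrix [[0, D − I], [−I, A]], which is one standard concrete form of such a "weird" 2n-by-2n matrix; this is my reconstruction, not a transcription from the slide, and the choice of which eigenvector block carries the node values follows from the companion structure:

```python
import numpy as np

def nb_spectral_partition(A):
    """Linearized-belief-propagation spectral sketch: instead of the
    adjacency matrix A, form the 2n x 2n companion matrix
    B = [[0, D - I], [-I, A]], whose spectrum matches the
    non-backtracking operator's, and read a partition off the sign
    pattern of the eigenvector of its second-largest real eigenvalue."""
    n = A.shape[0]
    D = np.diag(A.sum(axis=1))
    I = np.eye(n)
    B = np.block([[np.zeros((n, n)), D - I],
                  [-I, A]])
    vals, vecs = np.linalg.eig(B)
    # keep the real eigenvalues of this real matrix, by decreasing real part
    real_idx = [i for i in np.argsort(-vals.real) if abs(vals[i].imag) < 1e-6]
    v2 = vecs[:, real_idx[1]].real        # second-largest real eigenvalue
    return np.sign(v2[n:])                # the second block carries node values
```

Well above the threshold dθ² > 1 this recovers most planted labels up to the global red–blue flip; below it, the relevant eigenvalue falls inside the bulk of radius about the square root of the average degree, and the correlation disappears.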
And the corresponding eigenvector corresponds to the partition — this is from a sample from the model. Our friends also applied it to a bunch of real networks, and in many of them the real eigenvalues outside the circle of radius square root of the average degree seem to correlate with communities in the data. Any questions about this part before I start talking about the nonlinear theory? It seems people are ready for the nonlinear theory. So: we have these interesting models, with some hierarchical connection between random variables, which can be inverted via belief propagation — this nonlinear recursion. And that's not special to phylogenetic reconstruction: it also gave us the partition in the block model, where by linearizing we got the right answer, which is good. I'll tell you in advance that in the nonlinear case we know much less — there will be a lot of open-ended things here — but I'll tell you what we know. The theme here is going to be large q. I'll say specifically what "large q" means maybe in the next lecture, but for now think of q as big — a hundred, or twenty, or so; it can be smaller than that, but the number of colors is large. We have exactly the same process: with probability θ you copy your parent's color, and otherwise you rerandomize. That's the setup. The first thing I want to mention is that large q makes things easier in some sense, not harder: everything we knew how to do before, we can still do. Here's the claim: for all q, if dθ² is bigger than one — above the Kesten–Stigum bound — then for the tree broadcast model you can detect: the magnetization does not go to zero; it stays strictly positive.
Okay, so if I run this process on the tree and look at many levels, I do belief propagation, I try to infer the root, and I get a random variable that's nontrivial. We can detect in the block model, meaning we can find a partition that's better than a random partition into blocks, even if q is 100; and we can recover phylogenies, all these tree structures, from sequences of length order log n. So everything that we knew how to do before, if d theta squared is bigger than one, we still know how to do now; we haven't lost anything. I'll actually give you a proof of this claim, but I'll just do it for even q, because I'm lazy. So here's the proof for even q; it's a one-line proof. What I'm going to do is divide the q colors into two sets of size q over two: my hundred colors I'm going to divide into two sets of fifty colors each. All these fifty colors, partitioned arbitrarily, are going to be plus, and those fifty colors are going to be minus. And you ask, what does this do to the probability distribution? You have to think for a second, but you see that plus goes to plus: when you copied, you stay in the same class, and when you haven't copied, you randomize, so you're still going to randomize between the two classes. So you have a real reduction from the case of even q to the case of two colors, and it allows you to apply exactly the same algorithms that you applied before; it exactly maps the measures in a way that the distributions are consistent, and that lets you transfer the answer. Okay, but what about odd q? For odd q you have to do some work, and we don't want to do the work here; this is something that maybe we mentioned in the answer to a question in the previous lecture. More generally, this is true for a broadcast process with any Markov chain on the edges.
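The one-line reduction can be checked mechanically. Below is a small sketch of my own (not the lecture's code) that builds the q-color transition matrix, lumps the colors into two classes of q/2, and verifies that the lumped chain is an honest two-state chain with the same second eigenvalue theta:

```python
import numpy as np

def potts_transition(q, theta):
    # Copy the parent's color with prob. theta; otherwise pick uniformly
    # among all q colors (the same color may be re-drawn).
    return theta * np.eye(q) + (1 - theta) * np.ones((q, q)) / q

def collapse_to_two(M, q):
    # Lump colors {0,...,q/2-1} into class "plus", the rest into "minus".
    half = q // 2
    P = np.zeros((2, 2))
    for a in range(2):
        for b in range(2):
            # Every row inside a class is identical here, so the lumped
            # process is again a Markov chain.
            P[a, b] = M[a * half, b * half:(b + 1) * half].sum()
    return P

q, theta = 100, 0.3
P = collapse_to_two(potts_transition(q, theta), q)
# You stay in your class w.p. theta + (1-theta)/2; the second eigenvalue
# of this symmetric two-state chain is P[0,0] - P[0,1] = theta.
second_eigenvalue = P[0, 0] - P[0, 1]
```

So the collapsed chain is exactly the binary broadcast with the same theta, which is the content of the one-line proof.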
So when you have the same Markov chain M on every edge, theta is the second eigenvalue: you look at the spectrum of M, the largest absolute value in the spectrum is one, and theta is the second-largest absolute value. With that definition of theta, all the results that we know hold. For the tree broadcast models you actually go back to Kesten and Stigum; for the block models you have Bordenave, Lelarge, and Massoulié, and Abbe and Sandon, who worked pretty hard to do it; and for phylogeny, in joint work with Sébastien Roch and Allan Sly, we did it. So you can do it; it requires work, it's not straightforward, but it's something. So what we could do before, we can now do. Now we're going to be more ambitious: can we do what we did before even below the threshold? Can you do it for some theta for which d theta squared is less than one? This may be the next question you want to ask: above the Kesten-Stigum bound, whatever we could do before we can still do; what about below the Kesten-Stigum bound? So here is the result for large q; again, think about q greater than 100, and I'll give you more detail later. There exists a theta_q such that d theta_q squared is less than one, so it's below the Kesten-Stigum bound. Maybe a little interval here: this is zero, this is one, this is theta at the Kesten-Stigum bound, and now there's going to be this theta_q. If theta is bigger than theta_q, then for the tree broadcast model you can distinguish, you can detect in the block model, and you can recover phylogenies from sequences of length order log n. So everything that we could do in the case q equal to two only above the Kesten-Stigum bound, where we knew that we cannot do better below it, we can now do below the bound for large q.
So for large q we can do below the Kesten-Stigum bound what for q equal to two we could only do above it. Okay, but there's a twist; I'm going to tell you what all the twists are. You cannot do it using linear estimators. Remember, one of the nice things about the case q equal to two was that we analyzed belief propagation and we analyzed another algorithm, the majority algorithm, and they both gave us the same threshold, so we were happy. But for large q this is not the case: with linear estimators you can only do things above the Kesten-Stigum bound. You also cannot do it with robust estimators; I don't know if I'll have time to tell you today what robust estimators are, but you cannot do it with them. So somehow your reconstruction procedure has to be delicate: you can do belief propagation, which is this nonlinear iteration from the previous lectures, and this would work, but if you wanted to do something simpler, simpler things are not going to work. You can detect in the block model, but what is the twist there? The twist is that this is believed, though not known formally, to have a computational-statistical gap. This conjecture was made many times; I'm citing these papers as papers that gave some evidence for it, but it goes all the way back to the physics papers. So what does it mean to have a computational-statistical gap? People have algorithms that can detect in this interval below the Kesten-Stigum bound, but these algorithms take exponential time. It's not some spectral algorithm where you look at a matrix and compute an eigenvector; these algorithms say: I'm going to enumerate all possible partitions, and for each partition I'm going to decide something. So it's something that definitely nobody would apply in practice. And you can recover phylogenies from sequences of length order log n, but this is not written up.
One of the reasons it's not written up is that it holds only in the very, very simple case that I told you about. Why does it work? If all the thetas are the same, I can try the same algorithm that we've seen before. Recall what that algorithm was: you find, for each leaf, the most correlated other leaf. But now you're not allowed to take majorities; you have to do belief propagation in order to estimate the roots. Okay, it doesn't matter. The conjecture is, again, that it cannot be done robustly. What does it mean that it cannot be done robustly? It cannot be done in the setting that is interesting for biology. And what's the setting that is interesting for biologists? That the thetas are not all the same, or the tree is not exactly fully balanced. Just for these lectures I finally found a formal formulation of this conjecture, which maybe I'll give in the next lecture. So what happens is that in the linear case we had very robust, simple algorithms and inference procedures that did what we wanted. They were optimal in the sense that they worked all the way down to the Kesten-Stigum bound, and below the Kesten-Stigum bound nothing works. Here, below the Kesten-Stigum bound you can still do whatever we did before, but the inference procedures are sort of nasty. They have to be nonlinear; the only way we know how to do the block model is to do something that runs in exponential time; the only way we know how to do phylogenetics is to make a lot of unreasonable assumptions. So somehow you get this feeling that things are more delicate and harder. Okay, so in the next few minutes I'll just give you a little bit of a feeling for the fact that we know that some things do not work.
In the next lecture, maybe in the first few minutes of the talk, I'll tell you a little more about the nonlinear things that we believe or conjecture, in particular about this conjecture. Now I'm going to tell you about two results showing that if you want to do something robust in some sense, something non-delicate, then the Kesten-Stigum bound is the right bound even for large q. So I'm going to tell you about two theorems: one was joint with Yuval Peres, and one was joint with Svante Janson. Okay, never mind. The setting is the following: we are just in the broadcast process, we have one tree, and q is large. And I'm going to talk about a new notion, which is called count reconstruction, or census reconstruction. You want to do the same thing that you did before: here is x_h, and you want to say something about the root x_0, but I'm going to limit your power somehow. Instead of giving you x_h, I'm going to give you this information c_h. So what is c_h? It's a table: for each color a, how many indices of x_h are equal to a. I give you the count of x_h. Again, x_h is a 2^h-dimensional vector; the count c_h is a q-dimensional vector. It just tells me: in the vector x_h, so many coordinates have color one, so many coordinates have color two, so many have color three, and so on. So what does the theorem about count reconstruction, or census reconstruction, say? It says that if the only information you're given is not x_h but just the number of colors of each type, then the Kesten-Stigum bound is the right threshold.
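To make the restricted information concrete, here is a quick sketch (an illustration of mine, with arbitrary parameter values) that simulates the broadcast on the binary tree and then discards everything except the census, i.e. the count of each color among the 2^h leaves:

```python
import random
from collections import Counter

def broadcast_leaves(q, theta, h, root_color, rng):
    """Run the q-color broadcast on the binary tree for h levels and
    return the vector x_h of leaf colors (length 2**h)."""
    level = [root_color]
    for _ in range(h):
        nxt = []
        for color in level:
            for _ in range(2):                    # two children per vertex
                if rng.random() < theta:
                    nxt.append(color)             # copy the parent
                else:
                    nxt.append(rng.randrange(q))  # fresh uniform color
        level = nxt
    return level

def census(x_h, q):
    """The only data allowed in census reconstruction: for each color a,
    how many coordinates of x_h are equal to a."""
    counts = Counter(x_h)
    return [counts.get(a, 0) for a in range(q)]

rng = random.Random(0)
x_h = broadcast_leaves(q=5, theta=0.4, h=10, root_color=2, rng=rng)
c_h = census(x_h, q=5)
```

The theorem says that an estimator that sees only `c_h`, and not the positions in `x_h`, can beat random guessing precisely when d theta squared is above one.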
Let's think a little about what it says. It says that belief propagation, which does work above the Kesten-Stigum bound in this case, is not just using how many there are of each particular type; it's also using where they are. It's not using the fact that there are so many blue, so many red, so many yellow, so many green; it's using something like: oh, there's a blue here and there's another blue here and the green is over there. It uses the more delicate information about their locations. So that's one sense in which, if you want to do something robust, you can only get to the Kesten-Stigum bound. The second notion, with which I think I'll finish, is the notion of robust reconstruction. In robust reconstruction, maybe I'll draw the picture: I'm going to fix some noise level eta, and then do exactly what we did before. There's the root x_0, then there's x_h, and then I'm going to have a vector y_h. The vector y_h is going to be exactly the same as x_h, except I add a little bit of noise: for each coordinate independently, let y_v equal x_v with probability eta, and otherwise y_v is uniform at random, for some fixed eta. So I'm introducing an additional level of noise, which might be much bigger than the original theta noise of the tree. Instead of giving you the leaves, I'm giving you a very, very noisy version of the leaves, but the whole process up to the leaves I haven't touched. Okay, and the result that we proved with Janson is that for robust reconstruction, again, the Kesten-Stigum bound is the right threshold.
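The extra leaf-only noise is easy to state in code. Here is a minimal sketch of mine (arbitrary numbers) of the noising step y_v = x_v with probability eta, else uniform; the expected agreement between x and y is eta + (1 - eta)/q:

```python
import random

def noisy_leaves(x_h, q, eta, rng):
    """Leaf-only noise of robust reconstruction: keep each coordinate with
    probability eta, otherwise resample it uniformly among the q colors."""
    return [x if rng.random() < eta else rng.randrange(q) for x in x_h]

rng = random.Random(1)
x = [rng.randrange(4) for _ in range(20000)]
y = noisy_leaves(x, q=4, eta=0.25, rng=rng)
agreement = sum(a == b for a, b in zip(x, y)) / len(x)
# Expected agreement: eta + (1 - eta)/q = 0.25 + 0.75/4 = 0.4375.
```

The point of the theorem is that this eta is fixed while the depth h grows, so the corruption is a constant-strength perturbation of the leaves only.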
And this means that belief propagation is not just using which color each leaf is leaning toward; it actually needs to know that the leaf is pointing very strongly in that direction. Okay, so I think that's all I'm going to say today. Next time I'm going to start with this slide, and I'll spend maybe 10 or 15 minutes on more nonlinear theory. I'm going to start with the theorem that, to some extent, for large q the right threshold is d theta equal to one: if d theta is greater than one, not d theta squared but d theta, then for large enough q we can distinguish the root better than random. So what was d theta squared before now becomes d theta. But again, this uses the much more fragile, location-based, nonlinear algorithm, and this is in the limit. And I'll also tell you about some other results and conjectures. Okay, so I think I'll stop here for today, and I'm happy to take questions for as long as people have questions. Okay, so let's unmute all the participants; we can take more questions now. I don't see any in the chat at the moment, but until one comes: I just wanted to ask about the block model results. Did you always assume that the graph was sparse, or no? Am I muted or not? No? Okay. So, yeah, there's a very long history for the block model. Let me just give you a little bit of context. People have studied the question of partitioning block models for decades, and definitely the first works were in the dense case, and the connection to random matrices and rank-one perturbations goes way back.
If you want, the state of the art until this work with the linearization of BP was to use results from random matrix theory, or perturbations of random matrices, to see what you can do there. And we were aware that the spectrum of the usual operator is too much influenced by the high-degree nodes; somehow the high-degree nodes kill the usual spectral methods that we use, and that's the reason there was a need for something else. Definitely the results about the dense case, or average degree at least log n, go all the way back to the '90s. Thank you. Other questions? For the robust reconstruction, is it mainly understood when eta is much smaller than theta? Yeah, very good, you ask the right question; I was a little careless here. The way it works is that you fix theta first: you give me d and theta with d theta squared at most one, and I'm going to find a noise level eta such that for this noise level you will not be able to reconstruct. And this noise level can be much, much higher than the tree noise, but it's only at the leaves. So the statement, in one direction, with details, is: if d theta squared is less than or equal to one, then there exists an eta between zero and one such that with this noise level eta, you will not be able to reconstruct. The noise is only at the leaves of the tree, and the important thing is that this eta does not depend on how many levels there are; it depends only on the Markov chain and theta. Yeah. There's a question in the chat from Ratoul: can you explain how giving the count provides less information? Isn't it even the same information as in the unknown-permutation case? Okay, good.
So let me help clarify what's going on here; maybe I'll try to find an empty page. Okay. The permutation case was a different problem. In the broadcast model, if you want, the goal is to estimate the root. In the phylogeny case there's an unknown permutation, but there are independent samples from the process, and the goal is something else: the goal is to recover the tree. And you may ask what happens if you are given a single sample and you don't know the structure of the tree. This is a very good question, which I hadn't thought about until you asked. It's completely clear that in this case you get something that is permutation-invariant, because the distribution of what you get is permutation-invariant. So in this case everything is permutation-invariant, and the only thing that you can do is count reconstruction. By definition, you're given the leaves up to a random permutation, so you get something permutation-invariant, and all the information you have is how many there are of each type. Therefore the only thing you can do is count reconstruction, and therefore, for this problem where you don't know the tree, the Kesten-Stigum bound is always tight, for every q. Okay, so let me just repeat the important difference between question number one and question number two. In the broadcast model we know the structure of the tree: we are given x_h, and we know, say, that these two coordinates of x_h are sister coordinates. We don't see what's inside the tree, but we know the structure of the tree.
And I guess the question you're asking is what happens if you're just given a single sample and you don't know the structure of the tree. If the meaning of not knowing the structure is what I said, that an unknown permutation is applied to the leaves of your actual sample, then you can't do anything better than the count, and for the count the Kesten-Stigum threshold is the answer. Does this make sense? Yes, he says yes. So, questions in the chat. For the review session, is Frederick here? Let me look at the participant list. No, I don't see him, so I'm not sure. The review session is supposed to start at half past; he doesn't have to be here right now, but it would be good for him to be early. People who are interested can either stay or reconnect later; it will be at the same link. Okay, but I'll stay here and keep the room open. See you tomorrow. Thank you. Bye. I'm going to stick around to see if Frederick shows up. I sent him an email yesterday with a brief explanation of how we run this. Nope, no reply yet. Okay.
There is a very simple algorithm for the inference of posteriors for probability Markov models on trees. Asymptotic properties of this algorithm were first studied in statistical physics and have later played a role in coding theory, in machine learning, and in evolutionary inference, among many other areas. The lectures will highlight various phase transitions for this model and their connection to modern statistical inference. Finally, we show that, perhaps unexpectedly, this "simple" algorithm requires complex computation in a number of models.
10.5446/55679 (DOI)
So welcome to the third lecture in the sequence of lectures by Elchanan Mossel on Simplicity and Complexity in Belief Propagation. The lecture will be recorded, so if you do not want to appear on the screen, please keep your video and your audio off. There will be an opportunity for questions, both through chat, which will be posed intermittently to the speaker, and at the end, when we stop recording the lecture and all participants can unmute themselves and ask further questions. After today's lecture by Elchanan, half an hour later, starting at 17:30 UTC, we will also have a lecture by Shirshendu Ganguly on large deviations of sparse random graphs. His lecture series will continue on Thursday at this time, so at 16 UTC, and on Friday as well at this time. There will be another exercise session for Elchanan's lectures tomorrow after the first lecture, so at 17:30 UTC. To start with the lecture, I see Elchanan has already shared the screen. Thank you very much. Thank you to the survivors for showing up for the last lecture; you made it to the last lecture, you should congratulate yourself. I don't know, if the number is small, maybe you should make it so that everybody can unmute themselves, or do you think that's a bad strategy? I feel like it is a small enough group that maybe we can do that. But of course then we will hear somebody's washing machine, and that person won't know it's them, so I don't know. In the worst case, if we hear the washing machine, we can mute everybody. So, last time I wanted to understand the nature of the more fragile phase transition that we have for large q, where the theory is nonlinear, and one of the theorems that we mentioned last time concerns this process on the d-ary tree, where we want to understand what we can say about the root from the leaves.
So we have this x_0 here at the root, then we see the information x_h at level h, and we want to understand what we can say about x_0 given x_h. There is the threshold for census reconstruction, or count reconstruction: this means that instead of x_h you are just given how many there are of each type, so you get information that is permutation-invariant; and for robust reconstruction there's an additional form of noise. We won't talk about the proofs of those theorems. What I want to do today is start with sort of the opposite direction and just give you some intuition for how one can prove a theorem like that. So I'll give you actually the easiest theorem in this direction. It's easy because we allow q to be arbitrarily large. Here's the theorem: you fix d and theta such that d theta is greater than one; then there exists some finite q(theta) such that, if the number of colors q is bigger than q(theta), this random variable, the conditional expectation of the root x_0 given x_h, is a nontrivial random variable. It's not always (1/q, ..., 1/q); it has variance, which means that you can say something nontrivial about the root. Yes, again, everything is in the limit as h goes to infinity. So that's what we want to prove now. The order of quantifiers is maybe a little cheating, but I think that's the easiest proof, and maybe the first proof in this direction. So first I fix d and theta such that d theta is greater than one, and then I want to show that for a large enough number of colors q, I can infer something about the root from the leaves, no matter how deep the tree. Okay, so let's do the proof. What's the easiest q for which it would be possible to do this proof? The easiest q would be the largest q. What's the largest number you can think of? Anyone? Say again? Five? Okay, very good. I can think about a larger number.
So I will choose q equal to infinity. We didn't define this model for q equal to infinity, so let's just think about what this model would be. I still have infinitely many colors, so I start with one of them. It's maybe a little hard to imagine what it means to start with one of them uniformly at random, so let's not worry about it. I start with this color, and then here I copy it, and here I decide not to copy it. What do I do when I do not copy a color? I choose another color uniformly at random. We don't really know what this means either, but it definitely means that it's going to be a color chosen differently from any of the colors we've seen so far. So here we're going to call this color one, even though it's not necessarily the first color, and here we're going to give color two; and then maybe I'm going to copy, so this would also be one, this would also be two; then I'm going to choose a different color, and this is going to be three. Here I'm not going to copy, not going to copy, and I'm going to have four and five. But these labels are arbitrary; they don't really mean anything. So in this case, whenever I change, I change to something that I haven't seen before. Okay, so that's the basic model, and I claim that clearly in this case the threshold is d theta equal to one. In other words, what I claim here is that if d theta is greater than one, then x_h gives information on x_0. Okay, so let's try to understand why, and it's true asymptotically, right? Obviously, if you have just one level, you can look at the colors you have at that level, and it's pretty likely to work. So, okay, what is d theta equal to one?
d theta equal to one is obviously just the branching process threshold here, so let's see how we can argue based on a branching process argument. Here's my claim; I'll just denote it in the picture, and I'm just going to do it for d equal to two, the binary tree. What would be a good situation for us? A good situation is that I copied here and I copied here; this happens with probability theta squared. And then in the rest, I don't necessarily want that I copied everywhere; I just want that there's at least one path from the root of this subtree to the leaves along which I always copied. Okay, so what is the construction again? These edges that I mark are edges where I copied. Let's call this color star: I have a star here, I have a star here, I have a star here, and all I want is that from this point on there's at least one such path that gets to the leaves of this object. I don't care about anything else. So what is the claim here? There are two claims. Claim one: in a configuration like that, or maybe I should say in such a leaf configuration, I know that the root has to be star. If the leaves of the tree are such that there's a star in this subtree and a star in that subtree, then, since the same color cannot appear twice independently, both occurrences must have emanated from the same place, and this place has to be the root. So if I have a star here and a star here, then the root has to be star. And claim two: the probability of this configuration is at least theta squared, the probability that I copied from here to here and from here to here, times, let me call it, p_branching squared, the probability that the branching process from here survives, squared.
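The survival probability p_branching in claim two solves the standard fixed-point equation for percolation of copied edges on the d-ary tree. Here is a quick numerical sketch of mine (the parameter values are arbitrary), where each child edge is a copy independently with probability theta:

```python
def survival_probability(d, theta, iters=10000):
    """Iterate p -> 1 - (1 - theta*p)**d starting from p = 1.  The limit
    is the probability that the cluster of copied edges below a vertex of
    the d-ary tree reaches infinity; it is positive iff d*theta > 1."""
    p = 1.0
    for _ in range(iters):
        p = 1.0 - (1.0 - theta * p) ** d
    return p

p_super = survival_probability(d=2, theta=0.6)  # d*theta = 1.2 > 1
p_sub = survival_probability(d=2, theta=0.4)    # d*theta = 0.8 < 1
```

For d = 2 and theta = 0.6 the nonzero root of p = 1.2p - 0.36p^2 is p = 5/9, while below the threshold the iteration collapses to zero, matching the d theta = 1 criticality used in the proof.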
Right, if the branching process from here survives, then I will have at least one guy with star here, and at least one guy with star there, and of course the key fact is that if d theta is greater than one, then the branching process survives with positive probability. Okay, so this is sort of cheating, since it's for q equal to infinity, but still there is a lesson here. The lesson is that it's actually not about the numbers: the number of stars that I'm using here is just two, and any color that appears on the tree appears many, many times. So it's really not about the numbers; it's about where I see the colors, something about their location. The key fact is that I see the same color on the left and on the right, and it's very different from the count reconstruction that we've seen before. So it's not about how much; it's about where the star is. Questions about this proof? Okay, good. So now, how do you do the general, finite-q case? I'll just indicate it and leave it as an exercise. For finite q, again, let's look at d equal to two and fix theta such that d theta is greater than one; and the q we are going to fix later, we are not committing yet to the value of q. We are going to do something similar, but a little more complicated, so I'll tell you what the something similar is. Here's my big tree t. I'm going to infer the root color to be c if there is something that's called an L-diluted binary subtree t' of t, rooted at the root, where all the leaves have color c. What is an L-diluted tree? It's like a binary tree, but I'm allowed to skip levels. Maybe I should draw it; let's see if I can use a different color. I'll draw a big binary tree here. Okay, so what is a 2-diluted tree? Let me try to
paint this in a different color. So a 2-diluted tree: maybe it's this node and this node, meaning I'm allowed to skip a level; then from this node I go to these two nodes, skipping a level again, and so on down the tree. So it's not a binary tree; I keep only every second level. And what I'm saying is that my procedure is this: if this color and this color and this color and this color at my leaves are all red, I'm going to declare the color of the root to be red. Okay. And I can do it for more than two: here L was equal to two, but I can do it skipping three levels, four levels, and so on and so forth. I'm going to fix the parameters later, but the basic procedure is again going to be based on location. Instead of just having two guys, it's going to be more fractal: I'm going to look for a fractal-like structure where all the colors in this fractal are the same color, and if all the colors in this fractal are the same color, I'm going to recover the root to be that color. Okay, so that's going to be my inference procedure. So what are the exercises here? These are really just exercises in branching processes, so I won't do them. The two exercises are the following. First, let's just remember the order of the parameters: what I started with is that I fixed d and theta such that d theta is greater than one; I didn't say anything about q yet. The first exercise is that, once I've done that, there exist an L and an epsilon greater than zero such that, if the color of the root is c, the probability that such a tree exists is at least epsilon. Okay, so you will see structures like that. And the key fact is that L and epsilon do not depend on q.
It's a uniform bound: for this L, you will see this structure, so you can do this inference once in a while. Now, unlike the previous case, it's not necessarily true that if I see such a fractal, the root must be that color. So I somehow have to make sure that when I infer, I don't make mistakes, and for this we need the second claim. The second claim uses the fact that q is large. The claim is that for all epsilon greater than zero and all L, if q is sufficiently large, so q as a function of epsilon and L, and if the root is colored by a color different from c, then the probability that there is an L-diluted tree with all its leaves of color not equal to c is at least 1 minus epsilon. This is a much stronger claim. What do I want here? When I draw my picture, I draw it in blocks: these are the first L levels, the root is not color c, and I allow maybe just one guy to have the color c; everything else has to be a color other than c. Then recursively I have another L levels: below each guy that is not colored c, either none of them is colored c, or at most one of them is. And if the number of colors is large enough (again, it's a branching process argument), you can show that such a big tree exists where none of the chosen leaves has color c. Now, why are these two claims useful? I won't show it in detail, but it is a combinatorial exercise to show that if this tree exists, then that tree doesn't exist, and vice versa. If you have the fractal of color c, you cannot have the fractal where everything is colored differently, so one excludes the other. This says that you will not make mistakes too often: if the color is not c, you will most likely not say c. So again, the interpretation of exercises one and two: exercise one says, if the color is c, you are going to say c with probability at least epsilon.
And the second exercise says that if the color is not c, the probability that you will say c is at most epsilon. So for exercise two, are you looking for a tree where all the leaves have the same color that is not equal to c? No. The only requirement is that the color is not c. I'm thinking in a very simple-minded way: it's either c or not c, and I want essentially everything to be not c in this fractal sense. And Ratoul was asking to verify that the leaves of the diluted tree are all on the same level. They're all at level h, which I assume is a multiple of l. And in exercise one, you're not specifying the level? In exercise one, h is again a multiple of l, if you want, but it's true for every h. So the epsilon does not depend on h; neither on q nor on h. It's a property of branching processes. I don't know if you've seen this property before, but for example, if p is large enough and I look at, say, percolation on the four-regular tree, then I can find a three-regular subtree where all the branching happens, where there's always branching. So this is a similar phenomenon. The critical value for plain percolation, or for the branching process here, is one quarter. But my requirement is much stronger: to find a three-regular tree inside a four-regular tree where every edge is open. For large enough p, this will happen. You write recursions similar to the ones you usually write for branching processes for this condition, you get a fixed-point equation, and you check that it has a root less than one. It's classical in the theory of branching processes; maybe not all of you have seen it, but it follows the same proof that people usually do for classical branching processes. OK, any other questions about this? OK, good.
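The fixed-point computation mentioned in this answer can also be carried out numerically. A sketch, assuming that "a three-regular tree inside a four-regular tree" means every node keeps at least 3 of its 4 children; the survival probability is then the limit of iterating the branching-process recursion from 1:

```python
def survival(p, iters=500):
    """Survival probability of the 'keep at least 3 of 4 children' process
    on a 4-ary tree with edges open independently with probability p:
    x_{n+1} = P(Bin(4, p*x_n) >= 3) = (p*x)^3 * (4 - 3*p*x),
    iterated from x = 1; the limit is the largest fixed point in [0, 1]."""
    x = 1.0
    for _ in range(iters):
        u = p * x
        x = u ** 3 * (4 - 3 * u)
    return x
```

Plain percolation on this tree survives for any p above 1/4, but the stronger requirement kicks in only for p close to 1: `survival(0.99)` is essentially 1, while `survival(0.9)` is already 0, illustrating the gap between the two thresholds.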
So there's another exercise; this is the easiest exercise in this percolation picture. If d theta is less than or equal to one, then the root and the leaves are asymptotically independent for every q. And that's sort of clear, right? Because from this branching process picture, if d theta (2 theta in our case) is less than or equal to one, then we know that this branching process will die. So we will not reach the leaves, and therefore what we see at the leaves is asymptotically independent of what the root is. OK, so that's all I wanted to say about this simple reasoning. Of course, if you're smarter, you do better. So Allan Sly found a different way to think about the correlation between different parts of the tree, by doing a more careful expansion of the magnetization. We have to say what the magnetization is: it is, in some sense, the expected value of the root given the leaves. So m_n is a vector version of the expected value of x_0 given x_h. And what he showed is that if you do the expansion as a function of both d and q, you get the main term, which is the Kesten-Stigum term d theta squared, and then a second-order term involving m_n squared. The key thing about it is that this term comes with a positive or a negative sign depending on whether q is bigger or smaller than 4: if q is bigger than 4, the term is positive, and if q is less than 4, it is negative. Once you have that (and it's not completely trivial), what Sly succeeded to prove is the following. First of all, if q is greater than or equal to 5, then the Kesten-Stigum bound is not tight. And the intuition is not too hard: the magnetization will go down and down and down.
But then at some point, when you're very close to the Kesten-Stigum bound, it will start going up. That's not too hard to derive once you have this recursion. Also, interestingly, he showed that if q is equal to 3 and d is greater than some large constant, a thousand or a million, then the Kesten-Stigum bound is tight. This requires more work, because you have to show that the magnetization stays low enough. I think Sly also has a result for q equal to 4, using the next-order term, but I'm not sure it is published, so let me not state it. And then it depends on whether the model is ferromagnetic or anti-ferromagnetic; it's even more delicate. OK. And for the few of you who went to the exercise session yesterday, one of the things I don't know if they had a chance to get to: for general Markov chains, not just this copying process, you can actually have the second eigenvalue be 0, yet the root and the leaves are not asymptotically independent. So in general there is no relationship between theta, the second eigenvalue of the matrix, and the question of whether the root and the leaves are independent. In such a construction, lambda_2 equal to 0 means that no matter how deep the tree is, if you just look at the census, it is independent of the root. But if you are allowed to do whatever you want, for instance to apply belief propagation, then there are examples where the second eigenvalue is 0, so you forget information very fast at the level of the census, and yet the root and the leaves are not independent. There's an exercise in the notes that does that, and there are other, even more sophisticated examples. OK, let's see how much time we have. Maybe I'll give you the conjectures, and then we'll take a two-minute break. For those of you who stayed, maybe you're interested enough to actually think about research conjectures in this area.
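As a numerical aside before the conjectures: for q = 2, where the Kesten-Stigum bound d·theta² > 1 is tight and the census achieves it, the phenomenon is easy to see by Monte Carlo. A sketch with illustrative parameters of my choosing:

```python
import random

def census_accuracy(h, theta, trials, seed=0):
    """P(sign of leaf sum == root) for the q = 2 (Ising) broadcast on a
    binary tree: each child copies its parent with probability theta and
    is uniform otherwise, i.e. it flips with probability (1 - theta)/2."""
    rng = random.Random(seed)
    flip = (1 - theta) / 2
    hits = 0
    for _ in range(trials):
        root = rng.choice((-1, 1))
        spins = [root]
        for _ in range(h):  # broadcast one level down, two children each
            spins = [s * (-1 if rng.random() < flip else 1)
                     for s in spins for _ in range(2)]
        guess = 1 if sum(spins) >= 0 else -1  # the census estimator
        hits += (guess == root)
    return hits / trials
```

With d = 2 the Kesten-Stigum condition 2·theta² > 1 means theta above roughly 0.707: at theta = 0.9 the sign of the leaf sum stays correlated with the root, while at theta = 0.5 the advantage decays with the depth.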
So these are two conjectures that I think I'm formally making for the first time ever. In particular, it may be pretty easy to refute them; I think that proving them is going to be difficult. I'm going to tell you two conjectures about the fragility of the Kesten-Stigum bound in this setup, another interpretation of that fragility, and you are welcome to think about them. They're definitely things that I've been thinking about, in some way or another, for many years. So we are going to consider a model where not all the thetas are the same. There's going to be a tree, maybe a binary tree, but each edge e is going to have its own theta_e. That's the model we're going to consider, and we're going to consider large q. In particular, there's an interval between what I call theta_r (it can be any number you want below the Kesten-Stigum bound) and theta_KS, where the root and the leaves are not asymptotically independent: the variance of the conditional expectation of x_0 given x_h does not go to 0. We know it actually converges to a limit in this case. So we're looking at the interval where there's some action. In the homogeneous case, theta_e equal to theta for all e, this is the regime where you can recover the root better than random, but you cannot do it in these robust ways. So here are the two conjectures. The first conjecture is that there is no estimator f such that f(x_h) and x_0 have non-vanishing correlation for all the models where the theta_e on different edges all lie in this interval from theta_r to theta_KS but are not the same for different edges. So you assume that the theta_e are known, so the estimator can depend on them? No, exactly: I assume the estimator is not allowed to know what the theta_e are. So again, there are actually two excellent questions here, two questions.
I actually don't even know the answer for sure if you do know the theta_e. This has to do with some monotonicities of this model. But I think the model should be monotone, and that might not be too hard. So maybe a preliminary question that you want to settle is: if theta_e is bigger than, say, theta_r plus epsilon for all edges, then x_h and x_0 are significantly, asymptotically, correlated. That's definitely one thing you would want to check as a prerequisite for conjecture one. But conjecture one talks about the situation where I don't know what the theta_e are. I'm just promised that they are all in an interval. You can choose theta_r to be any number that you want below theta_KS; I'm just telling you that the theta_e are in the interval between this theta_r and theta_KS, and you don't know what they are. And I want a universal estimator, one that does not depend on the values of the theta_e, that gives you correlation. Does that make sense, Omar? Yes, and is there any reason to require that the thetas are not larger than theta_KS? Right, so there isn't a reason; very good, Omar is already thinking ahead. What we know (and unfortunately, I think I only wrote the proof for q equal to 2) is that at least for q equal to 2, if theta_e is bigger than theta_KS plus any epsilon that you want, such an f exists. You meant, for the conjecture, what if some of them are smaller than theta_KS and some are larger? Yeah, you can allow yourself to do that too; it would still make the conjecture interesting, I agree. But sorry, I didn't fully answer your question. What I do know is that for q equal to 2, if all the theta_e are bigger than theta_KS plus epsilon, then there is such an f. It's not a completely trivial f, but such an f exists.
So there is a function that doesn't need to know the theta_e and recovers the root in a way that's correlated with it. Can you not see what I wrote there, Lea? No, I can see it, but I'm wondering: when theta is allowed to be variable, do you always know what theta_r is? No. So this is the question; let me try to motivate it a little bit. In applications like phylogenetic reconstruction, it's not reasonable to assume that you know the thetas. But it is reasonable to assume that the thetas all lie in an interval, say close to 1, or much bigger than the Kesten-Stigum bound, because you know roughly how much time elapsed between species. So making an assumption like "all the thetas are much bigger than theta_KS" is a reasonable assumption, but assuming that you know what they are is a trickier assumption. So in some situations (and this leads to the second conjecture) you want a reconstruction procedure that does not depend on knowing the theta_e. In the case q equal to 2, such a procedure exists when all the theta_e are bigger than theta_KS plus epsilon. What I'm claiming is that for larger q it should probably also work; I never checked. For even q it follows from the general philosophy that what you know for q equal to 2 you know for even q, so maybe there's a little bit of checking left for odd q. But I think the fragility comes when you are below the Kesten-Stigum bound. Your thetas are such that, if you are given the thetas, you can apply belief propagation, which takes into account in a very crucial way what the actual values of theta_e are, to recover the posterior, and you will do well. But if you don't know the theta_e, there isn't an algorithm that ignores the details of the process and still recovers the root. But do we know for sure that theta_r is not 0? We know.
If q is... you mean that the interval is non-empty? Yeah. Right. So that's exactly the point of the result we discussed: for q greater than 5, this interval is non-empty, because the Kesten-Stigum bound is higher than the threshold for actual reconstruction, for actual recovery of the root. So the interval is non-empty when q is greater than 5, and the conjecture only makes sense for large q: definitely not for q equal to 2, but definitely for q greater than 5, where this interval is non-empty. That's conjecture number one; I'm happy to talk about it later. And at a high level, conjecture number two says that it is impossible to recover phylogenetic trees using order log n samples under the conditions above, where the conditions above mean that you don't know what the theta_e are. So again, the analogue of conjecture one does not hold in the case q equal to 2 (a universal estimator exists there), and that is what helps us recover phylogenetic trees. The conjecture is that for large q, not only conjecture one holds, but also its corollary: the reconstruction of phylogenetic trees breaks down because of it. I'll just say this quickly, because I want to take the break. A strong version of "impossible" would mean there's just no information-theoretic way. A weak version would mean that it's computationally much harder. And the fact that it's computationally much harder we can probably never prove; you would just say that the only way you know how to do it is via some exponential-time algorithm, and it feels like you cannot do better. But maybe even the strong version is correct. I haven't really made progress on this problem; I've thought about it on and off for the last few months, and right now I have no intuition for whether the strong or the weak version is the right one. Maybe you can just disprove my conjecture; whatever you do, you'll make me happy.
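The heterogeneous-theta model behind both conjectures is simple to write down in code. In this sketch the unknown theta_e are drawn at random from the interval; the conjectures treat them as arbitrary unknown values in the interval, and the endpoint values below are just my illustration:

```python
import random

def broadcast_varying_theta(h, q, theta_lo, theta_hi, rng):
    """Broadcast on a binary tree where each edge e gets its own theta_e
    in [theta_lo, theta_hi]; an estimator in the sense of conjecture one
    sees only the leaf colors, never the theta_e values."""
    thetas_used = []
    colors = [rng.randrange(q)]  # uniform root color
    for _ in range(h):
        nxt = []
        for c in colors:
            for _ in range(2):
                te = rng.uniform(theta_lo, theta_hi)  # this edge's theta_e
                thetas_used.append(te)
                nxt.append(c if rng.random() < te else rng.randrange(q))
        colors = nxt
    return colors, thetas_used
```

Conjecture one asks whether a single function f of the leaf colors can stay correlated with the root simultaneously for every choice of the theta_e inside the interval.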
OK, so let's take a two-minute break now, in which you can ask questions. And then I'll finally talk about some elements of the simplicity and complexity of belief propagation. I'm happy to take questions. Yes, I think participants should also be able to unmute if they want to ask questions, or ask in the chat. I guess conceivably there could be some perturbative argument if the thetas are all in some very small interval around some theta. Yeah, so my conjecture is pretty strong. I didn't specify that theta_r is the first value where you can do something; my conjecture is that for every epsilon you can think of, if you look at the interval between theta_KS minus epsilon and theta_KS, you're doomed. So this is a pretty strong conjecture, and you might be able to refute it. OK, and Subharata is asking if there is a robust analog for the block model recovery problem. That actually sounds like a very good open question too. So let's think about it aloud for a second. For the block model, it's already conjectured that when q is large there's a computational-statistical gap. But now Subharata is asking maybe a different question: suppose we look at the block model where the parameters, the thetas, are different for different edges and you do not know them; you just know that they are in some interval, in this interval if you want. Do we know that recovery is information-theoretically impossible? So this is a great conjecture; let me call it the S conjecture. It's a very similar information-theoretic phenomenon in block models. I think this is a great question. I don't know that anybody has looked at it, and it is a very natural question. OK, good. So let's move on to the last part of the talk. Obviously I have 20 minutes, so I won't prove anything.
Maybe I'll state a couple of things, maybe not; we'll see how we do. Maybe I'll just tell you stupid jokes. I didn't tell enough stupid jokes in this lecture series. So I'm going to talk a little bit about the complexity of BP. The question is: what is the complexity of BP? We've seen a lot of BP in these lectures; how do we formally measure its complexity? In some sense, it's fast. We've seen this recursion: it runs in time linear in the volume of the tree. You start from the leaves, you compute a real number, then you compute another real number, and another. So maybe the complexity is very low. But there are some notions of complexity under which it is not. One is that it uses real numbers. The process that we talked about, say with q equal to 2, is discrete; it's just bits. Why is it that when I do the inference, I have to use real numbers? Is it really necessary? The other is that it uses depth: it's this recursive procedure, different from just summing up the leaves; you have to do something iterative. Is that in some sense necessary? And the fractal picture that we've seen in the proofs of the behavior below the Kesten-Stigum bound suggests that maybe depth is needed. So here's my stupid joke; I'll do one stupid joke. What is everywhere and understands everything? If you Google something like that, you'll get omnipresence, but this is not going to turn into a course in theology. My answer for what is everywhere and understands everything is the deep net on your smartphone that understands what you say. Each of us has a smartphone, or Alexa, or some other software; you talk to this thing and miraculously it understands what you said. And I don't really want to claim that anything we are doing is too relevant to deep nets. But these are hierarchical processes that run very fast, and they understand what you said.
But they're based on multiple layers. So I think, for me, one of the questions that came from the lack of understanding of deep nets is the following; this is one of the reasons I asked these questions about BP. Mathematically, it is natural to ask, given what we've seen of deep nets: is there a process that satisfies three natural properties? One, it's a realistic model of data: you have a model that generates realistic-looking data. Two, reconstruction: you have algorithms that reverse-engineer the generative process, maybe with very heavy data and very heavy computation, algorithms that can produce the deep net that goes on your phone. This is what Google does; it takes them a lot of time and effort. And three, depth: what you actually run on your phone requires depth. So if we take a very abstract point of view on all of these deep net things, as people doing probability, what do we want? We want a probability model that generates data that looks somewhat reasonable; a model for which you can come up with deep nets that do the inference, so you can recover the net that you're looking at; and finally, a model for which inference actually requires depth in some form. What I claim is that we already looked at such a model. This broadcast process, this tree Markov chain, is realistic: people use it in phylogenetics, in information theory, in a bunch of areas. We saw that you can reconstruct it; this is phylogenetic reconstruction: given a lot of samples, I can build the tree that generated them. And then maybe the missing piece in this trinity is: do you really need the depth? So that's the abstract reason why you would be interested in this question of depth. And a related question: why do we really use real numbers? Deep nets and BP use real numbers. Why do you need real numbers when everything that you're talking about is discrete?
So this is the high-level motivation for the results that I'm briefly going to talk about now. OK, so I'll actually start from this problem: why do you have to use real numbers? One way to think about this is to ask: what are the memory requirements for belief propagation? Belief propagation propagates real numbers up the tree. And already in this paper of Evans, Kenyon, Peres, and Schulman from 20 years ago, they suggested that even for q equal to 2, any recursive algorithm on the tree that sends messages up the tree using at most some constant number B of bits of memory per node can only distinguish the root value better than random when theta is above some threshold theta_B, and this theta_B is not the Kesten-Stigum threshold; it's a bigger number. So if you want to do what belief propagation does, this simple nonlinear recursion with real numbers that we've seen, where you just multiply and add and divide, and you want to do it with a bounded amount of memory (you can truncate, but you can think about other ways of doing it with bounded memory too), then you will not get all the way to the Kesten-Stigum threshold; you will miss it. We proved that this is correct, in joint work with Vishesh Jain, with Frederic Koehler, who was the TA for this class, and with Jingbo Liu, who was a postdoc here. And I'll show you the picture, from slides that Jingbo made; these are very nice pictures. So what happens when we do belief propagation? This is not completely consistent with our notation, but there's the root x_1, a bit, and there's a noisy channel that copies it to x_2, x_3, x_4, x_5, x_6, and x_7. Then, going leaf-wise, this is what belief propagation calculates: these are real numbers, combined via some nonlinear recursion, and at the end we get our estimate, which gives us two probabilities. So the broadcast goes down the tree, and then BP goes up the tree.
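The recursion in this picture takes only a few lines for q = 2. A sketch of exact BP on a binary tree, under the lecture's channel ("copy with probability theta, otherwise uniform", which for q = 2 means a flip with probability delta = (1 - theta)/2); this is the infinite-precision, real-number object that the memory question is about:

```python
def bp_root(leaves, theta):
    """Exact belief propagation for the q = 2 broadcast on a binary tree.
    `leaves` is a list of +1/-1 values of length 2^h; returns the posterior
    P(root = +1 | leaves) under a uniform prior on the root."""
    delta = (1.0 - theta) / 2.0  # per-edge flip probability
    # alpha[i] ~ (P(subtree leaves | node=+1), P(subtree leaves | node=-1))
    alpha = [(1.0, 0.0) if x == 1 else (0.0, 1.0) for x in leaves]
    while len(alpha) > 1:
        nxt = []
        for i in range(0, len(alpha), 2):  # merge sibling pairs
            up, um = 1.0, 1.0
            for ap, am in alpha[i:i + 2]:
                up *= (1 - delta) * ap + delta * am  # parent = +1
                um *= delta * ap + (1 - delta) * am  # parent = -1
            z = up + um  # renormalize: the messages are real numbers
            nxt.append((up / z, um / z))
        alpha = nxt
    up, um = alpha[0]
    return up / (up + um)
```

By symmetry of the channel, flipping all the leaves maps the posterior p to 1 - p, and a perfectly balanced pattern of leaves gives exactly 1/2.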
So the model that we're looking at is exactly the broadcast model that we looked at so far. But in the reconstruction, we only allow schemes where each of the messages y_i contains log L bits of memory, so it can take at most L values. We are not allowing it to be an infinite-precision real number: it can take at most L possible values, or equivalently, you can describe it with a string of log L bits. And the theorem that we proved, which was the original conjecture of EKPS, says that you really need infinite precision to get all the way to the Kesten-Stigum bound. Any other procedure you might suggest, say "I'm going to look five levels down and do majority iteratively", is going to give you something, but it's not going to get you to the Kesten-Stigum threshold. "I'm going to do belief propagation, but I'm going to discretize it and keep just 25 digits": this is not going to get you to the Kesten-Stigum threshold either. Whatever you do, if it doesn't use the full infinite accuracy, it's not going to get you to the Kesten-Stigum threshold. So it shows, to some extent, that you really need to use real numbers. And what was the estimate on theta_B? So theta_B is bigger than theta_KS, and it has the right scaling, which is polynomial in B; we have both an upper and a lower bound, with different constants. One direction is obtained by discretizing belief propagation, but not in the most trivial way; the constructive direction is discretizing belief propagation, and the main work is the lower bound. OK. The proof is pretty beautiful, but I won't talk about it, because I really want to tell you about a different model, which is more related to this question of depth. And here, the game is a little different.
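Before moving on, here is what "bounded memory per node" means concretely: the same kind of upward recursion, but with each message snapped to a finite grid of `levels` values, i.e. log2(levels) bits per node. The rounding scheme below is my own illustrative choice, not the discretization used in the paper:

```python
def quantized_bp_root(leaves, theta, levels):
    """Bounded-memory BP for the q = 2 broadcast on a binary tree: each
    upward message P(node = +1 | its subtree) is rounded to one of
    `levels` equally spaced values in [0, 1]."""
    delta = (1.0 - theta) / 2.0  # per-edge flip probability
    quantize = lambda p: round(p * (levels - 1)) / (levels - 1)
    msgs = [quantize(1.0) if x == 1 else quantize(0.0) for x in leaves]
    while len(msgs) > 1:
        nxt = []
        for i in range(0, len(msgs), 2):  # merge sibling pairs
            up, um = 1.0, 1.0
            for p in msgs[i:i + 2]:
                up *= (1 - delta) * p + delta * (1 - p)  # parent = +1
                um *= delta * p + (1 - delta) * (1 - p)  # parent = -1
            nxt.append(quantize(up / (up + um)))
        msgs = nxt
    return msgs[0]
```

With `levels = 2` this degenerates into a recursive-majority-like rule; with a fine grid it tracks exact BP closely on any fixed input. The point of the theorem, though, is that for any fixed number of levels, the threshold such a scheme achieves stays strictly above the Kesten-Stigum bound.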
Now we still want to recover x_0 from x_h, but before, we had to do it following this very specific tree strategy. Now we are allowed to use any strategy that we want. So we have some deep net in our cell phone: it takes x_h, applies some nonlinear operation, then another layer, another layer, another layer. We actually look at different architectures: what happens if we also allow non-tree architectures? And the question is, can we do it with low depth? Of course, we cannot expect to prove that a huge depth is needed, because BP has depth of order log n, which is order h; so we cannot expect a lower bound better than order h. There are various parameters for deep nets here; let's not worry about them. The question is whether we can prove something like that. Everything I'm going to tell you about comes from joint work with Ankur Moitra and Colin Sandon on this problem. The problem is that I don't know how, in 15 minutes, to show you three complexity classes from computer science. So I'm just going to skip AC0 for now, and I'm going to tell you what TC0 is. TC0 is like AC0, which I skipped, but with majority gates; so let me tell you what that means. These are logical circuits where you are allowed to do logical operations: the operations you are allowed are AND, OR, NOT (obviously), and majority with any threshold that you want and any number of inputs. So it's a circuit with some architecture: down here is x_h, maybe some inputs are duplicated; then a bunch of them go into an AND, an overlapping set goes into an OR, some other subset goes into a majority; then these outputs feed the next layer, and it goes on for multiple layers like that. But the number of layers that I'm allowed to have is constant; it is not allowed to grow with h. So this is TC0.
So what is the result that we get for this set of architectures, TC0? Here is a result that already shows something. It's hard to see immediately why it's connected to the Kesten-Stigum bound, but maybe I'll try to explain. When the number of colors q is 2 and theta is large enough, close enough to 1, then in this class TC0 you can estimate the root as well as BP. OK, so even though BP has h levels, this class has a constant number of levels. Maybe I should write "as well as BP, minus epsilon"; and this epsilon depends on the depth. And the conjecture is that this is true for all theta when q is equal to 2. So the conjecture is that in the Ising case, circuits of bounded depth with these majority gates can do as well as BP. We know it only when theta is very close to 1. When theta is very small we also know it, but only because BP doesn't do anything there: it just returns something independent of the root, and it's very easy to return something independent of the root. So that's one result. Maybe I'll draw a picture of what I said. Here is theta_KS. Below it, all algorithms fail, just because information-theoretically you can do nothing; there's no complexity question there, since you cannot recover the root when there's not enough information in the leaves. So it's really a question about what happens between theta_KS and 1. And the conjecture is that between theta_KS and 1, the kind of algorithms we use in TC0 work in all of this regime. But right now, we don't know how to prove it. And let me mention one more class, called NC1. This is the class of circuits of depth order log n, or order h in our notation, with AND, OR and NOT gates. You can also include majority gates; it doesn't really change much for complexity purposes. And as always in complexity theory, there's something that's not known.
The class TC0 that we've seen before is known to be contained in NC1, but we don't know if they're the same or not. It could be that TC0 is actually equal to NC1; that's a big open problem in computational complexity, maybe not as big as P versus NP, but still a big open problem. OK. So one thing that's easy to do is to show that BP is in NC1: with circuits of depth order log n, it's pretty easy to implement. But more interestingly, there is a broadcast process for which classifying the root better than random is NC1-complete. I won't describe this broadcast process to you; it's a Markov chain on a handful of states, a 3-by-3 or a 16-by-16 chain. But it means that if you want to estimate the root better than random for this Markov chain, you need the full power of NC1: assuming that NC1 is not equal to TC0, you need, up to a constant, the same number of layers as the depth of the broadcast process. In other words, unless TC0 is equal to NC1, depth of order h, or order log n, is needed. OK. Maybe I'll end with this conjecture; we are very far from proving it. The conjecture is that for any broadcast process just below the Kesten-Stigum bound, below the bound but where BP still classifies better than random, classification is NC1-complete. So you really need the full power of NC1 in order to classify better than random when you are below the Kesten-Stigum bound. This conjecture would say that we see the Kesten-Stigum bound again in terms of the computational complexity of learning the root: not just with tree architectures, but with any architecture, you actually need depth. OK, so maybe I'll conclude here. I have more details about these proofs; if you ask, I'll happily supply them.
So what's the conclusion of this lecture, or of the lecture series? BP is simple: it runs in linear time, and above the Kesten-Stigum bound it behaves like a linear algorithm. You've seen it both in the analysis of BP for the tree process, and we also mentioned, even though we didn't see the details, that in some sense that's how it behaves for problems like the block model. On the other hand, BP is complex: below the Kesten-Stigum bound, our understanding of what BP does on trees is that it does something very fractal-ish, which is harder to understand. We believe in statistical-computational gaps, both for phylogenetics and for block models, and maybe even just for the broadcast process. And there is some indication that it requires depth and precision. So I think that's all I wanted to say at a high level. Again, I have more details about the various proofs, so if you ask questions, I'm happy to answer. I also wanted to thank you guys: the organizers for organizing, and everybody for attending. Whenever you agree to do something like this, the week before you think: oh, why did I agree to do that? This is so terrible; I'm so stupid. But then, when you actually prepare the lectures and talk about them and think about them, you realize new connections, new conjectures. Having intelligent people listening to your rambling and asking intelligent questions is always good. So thank you guys. Well, thank you, Elchanan. Let's unmute the participants, so we can thank Elchanan for his three lectures. I can mute people, but they can unmute themselves for questions. Let's see; there are some thanks in the chat. Still waiting for questions. So maybe to start things off: when you say that you can do things with these bounded-depth networks, do you have some explicit construction for the net? So let me tell you; very good, you predicted very well some of what I'll say. So here's the TC0 construction; I'll tell you how it goes.
Essentially, you do the following. I'm going to do the bad thing that I shouldn't be doing, which is drawing: I'm going to take some number of subtrees, and for each of them I'm going to take a majority. So I take the majority of this subtree, and of this one, and of this one, and so on, and each majority gives me an estimate of the corresponding node. These estimates are not optimal; they are correlated with the true values, but they are not optimal. And then I'm going to apply BP on the rest, for a constant number of levels. So even though the tree is big, this top part is small, maybe five levels, and I apply BP to these majority values. OK, so this is very, very explicit, and for a constant number of levels I can do whatever I want. And why does it work? It works because of a result, which we proved with Joe Neeman and Allan Sly in the context of block models, that BP with noise classifies as well as BP without noise if theta is close enough to 1, for q equal to 2. So we had this notion that noise doesn't change the threshold. But you can think about the following situation. I have the broadcast process; I'll remind you of this picture. I have this broadcast process, and at the end I add a little bit of noise, or a lot of noise, to every individual leaf. Now, suppose I apply BP to this. We already know, by some robustness, that the result will be non-trivial. But in fact, it's not only non-trivial: it will asymptotically perform exactly as well as BP. The probability that you will be correct will be exactly the same. OK? So BP does some error correction in this regime. Say again? And the noise is not so large that it overwhelms this? The noise can be anything.
This noise can be any number less than a half. Let me write it more carefully, calling the noise level eta. Okay, we have to decide which limit we take; let me take eta to 0. The statement is: lim_{η→0} lim_{h→∞} P(BP(y_h) = x_0) = lim_{h→∞} P(BP(x_h) = x_0), where x_h is the vector of leaf values at depth h, y_h is its noisy version, and x_0 is the root. On the right-hand side we have the quantity that we like — the chance that BP on clean data is correct. On the left-hand side, I'm telling you that for every value of eta, no matter how large the noise is, as long as it's a constant amount of noise, I get the same level of accuracy as h goes to infinity. So the limit eta to 0 can also be "for every eta greater than 0"? Correct — instead of the limit, I can just write that this is true for every eta > 0. It doesn't matter how much noise you have: when you take the depth to infinity, you do as well as BP. And for this application, you don't need eta going to 0 inside? No, for this application we don't need eta to go to 0. But if you want, in the application the statement is the following: if I want to do within epsilon — say within 0.01 — as well as BP, then there is a bounded-depth circuit, but the depth will depend on 0.01. Right. Okay. Yeah, so it seems hopeless that this would work for large q, I guess. Yeah, something like that does not work for large q. So this is just for q equal to 2.
For large q, we know that once you add noise, BP of y and x become asymptotically independent below the KS bound, even in regimes where noiseless reconstruction is solvable. Okay, so Lynden was asking if you could go over AC0. Sure, let me give the definition quickly. AC0 is the weakest class here: the class of bounded-depth circuits with just AND and OR gates, no majorities. It's exactly what we had before, but with no majority. And the theorem says that if you take any AC0 algorithm and apply it to this vector x_h, you classify the root no better than random bits. So if I just apply a constant number of levels of ANDs and ORs to the leaves, and from this try to estimate what the root is, then I cannot do better than random. This is not trivial to prove.
There is a very simple algorithm for the inference of posteriors for Markov models on trees. Asymptotic properties of this algorithm were first studied in statistical physics and have later played a role in coding theory, in machine learning, and in evolutionary inference, among many other areas. The lectures will highlight various phase transitions for this model and their connection to modern statistical inference. Finally, we show that, perhaps unexpectedly, this "simple" algorithm requires complex computation in a number of models.
10.5446/55680 (DOI)
During the talk, you can ask questions in the chat; we'll try to answer them if possible and otherwise relay them to the speaker directly. We'll take a break after 30 minutes where you can ask some questions, and at the very end of the talk we'll stop all recordings so that you can ask questions with your mic if you want. At the same time, if you don't want to have your face on the internet, please just turn off your camera and your mic. Just a reminder: this is going to be a course in three parts — there's today, and then tomorrow and Friday the classes will be at 12pm Eastern time, meaning one hour and 30 minutes earlier than now. So after this brief technical introduction, I'm very happy to introduce Shirshendu Ganguly from Berkeley, and he's going to give us a mini-course on large deviations for random networks and applications. Okay, so should I start? Let me try to share my screen. Okay, so this is visible, right? Sometimes Zoom might freeze, so I might be writing and things might not be updating; if you think it's been too long without seeing any change, you should point that out to me and I'll do something about it. It should be fine, but sometimes it does that. Okay, yeah, so thanks a lot to the organizers for the invitation. It's a great sequence of topics, and a great way to keep everybody excited about something amidst all these rather grim circumstances. What I'll talk about is a topic that has seen a lot of activity recently: large deviations for some canonical nonlinear functions of independent random variables. I'll try to give a broad overview of some of the progress, including some of my own work, but there are so many new results coming up every day.
It's hard to keep tabs on everything, so if you think there is something I should be mentioning and am somehow missing, you should feel free to point it out to me. I'm also typing up some lecture notes. They might have some typos — it's been rather quick — but I'll send them to the organizers after this lecture, and they will contain a superset of what I'll talk about today and over the next few lectures. Okay, good. So: large deviations for random networks and applications. Let's start with the motivating question, and with the Erdős–Rényi random graph. I'm pretty sure all of you are familiar with this, but just to make things concrete: this is the random graph G(n,p) on n vertices where every edge occurs independently with probability p. So you have some big random graph. Now let's fix some subgraph H — some fixed graph H; for the moment, think of K_3, a triangle, the complete graph on three vertices. The object of interest is X_H, the random variable counting the number of copies of H in the random graph G. So G is a random graph and I look at the number of triangles in G. It's pretty easy to compute what the expectation of X_H is: counting, let's say, labeled triangles, it is, up to smaller-order terms, n³p³. And so the question of interest, the motivating question, is the following. I want to understand atypical behavior of this random variable. In particular, I want to understand the probability of events like P(X_H ≥ (1+δ) E[X_H]), where δ is some fixed positive number. X_H is typically close to E[X_H] — people know a lot about this: it satisfies a central limit theorem and whatnot, so it's really concentrated around E[X_H] — and I want to understand the atypical behavior.
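The expectation computation just described can be checked by simulation. A quick sketch (my own illustration), using the standard fact that trace(A³) counts ordered, i.e. labeled, triangles:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 300, 0.2
# Sample the adjacency matrix of G(n, p): symmetric, zero diagonal.
A = np.triu((rng.random((n, n)) < p).astype(float), k=1)
A = A + A.T

# trace(A^3) counts closed walks of length 3, i.e. labeled triangles;
# each unlabeled triangle is counted 6 times.
labeled = np.trace(A @ A @ A)
expected = n * (n - 1) * (n - 2) * p**3   # exact labeled expectation, ~ n^3 p^3

print(labeled, expected)  # agree up to lower-order fluctuations
```

The relative fluctuation here is of order 1/(n√p), so for n = 300 the two numbers match to within a few percent; this is the concentration around the mean that the upper-tail question looks beyond.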
And this actually goes by a name and has been studied for a while now: it was dubbed the "infamous upper tail" by Janson and Ruciński in '02. Okay, so another related question is a geometric one. The first is about computing the probability of a certain event, but I can also ask a more geometric question — this is going to be slightly vague at this point. What does the random graph G look like given this event — let's call it A for the moment? What is the geometric manifestation of conditioning on this event A? Do you tend to see more edges? Typically, a graph on n vertices with edge density p has roughly (n choose 2)·p edges. But maybe if I tell you that the number of triangles is much larger than typical, the number of edges actually becomes much larger — or there could be some other geometric consequences of this conditioning. So these are roughly the two guiding questions that will dictate whatever I'm going to say over the next few lectures. And before that, an observation, or just a fact — I'll be more precise, but it's not hard to see: X_H is a polynomial of independent Bernoulli variables. The edges of the graph are independent Bernoulli variables, and the number of triangles in this random graph is a polynomial of degree 3 in these independent bits. Okay, so any questions so far? Actually, let me just make sure that the chat box is visible to me. Okay. So this is the question, but let's first see some classical results about linear functions — the hardness here is that X_H is a polynomial. So let's recall some classical concentration and large deviations results for linear functions, in a very basic setting: let's first start with Hoeffding's inequality.
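The exponential-moment strategy about to be developed for Hoeffding's inequality can be checked numerically: optimizing exp(θ²Σc_i²/8 − θt) over θ has the closed form exp(−2t²/Σc_i²), attained at θ* = 4t/Σc_i². A small sketch (my own check, not from the lecture):

```python
import math

def hoeffding_bound(t, c):
    """Optimized Chernoff/Hoeffding bound on P(S_n >= t) for independent
    mean-zero X_i with ranges c_i = b_i - a_i:
    inf over theta > 0 of exp(theta^2 * sum c_i^2 / 8 - theta * t)."""
    s2 = sum(ci**2 for ci in c)
    theta_star = 4 * t / s2          # stationary point of the exponent
    return math.exp(theta_star**2 * s2 / 8 - theta_star * t)

c = [1.0] * 100                      # e.g. 100 variables supported in [-1/2, 1/2]
t = 10.0
s2 = sum(ci**2 for ci in c)

closed_form = math.exp(-2 * t**2 / s2)   # the familiar exp(-2 t^2 / sum c_i^2)

# A grid search over theta should not beat the stationary-point value.
grid = min(math.exp(th**2 * s2 / 8 - th * t)
           for th in (i * 0.001 for i in range(1, 2000)))

print(hoeffding_bound(t, c), closed_form, grid)
```

The three numbers coincide (here all ≈ e⁻²), confirming that plugging the optimal θ into the Markov bound is exactly the textbook Hoeffding exponent.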
Say X_1, …, X_n are independent mean-zero random variables with a_i ≤ X_i ≤ b_i almost surely, for constants a_i and b_i. I want a concentration estimate for the sum S_n = Σ X_i, which of course has mean zero: say I want to understand P(S_n ≥ t) for some t. Okay, so this is very classical and everybody is familiar with it, but let's review how to prove such a concentration result — the method is super classical, but it turns out that even a generalization of it will help us understand the random graph question I started with. Okay, so the strategy is the Chernoff method: you could use Chebyshev's inequality or the like, but if you want something stronger, you compute exponential moments and then apply Markov. For example, it is true that E[e^{θX_i}] ≤ e^{θ²(b_i−a_i)²/8}, where θ is a parameter we will optimize over: if you have a mean-zero random variable X_i supported on [a_i, b_i], you have this upper bound on the exponential moment. Okay, and writing c_i = b_i − a_i, this tells you that E[e^{θS_n}] ≤ e^{θ² Σ c_i²/8}. And now suppose I want to understand the probability that S_n ≥ t: on that event e^{θS_n} ≥ e^{θt}, so by Markov, P(S_n ≥ t) ≤ e^{θ² Σ c_i²/8 − θt}. Right, and this bound holds for all θ > 0, and now I can optimize over θ. So the general
strategy is: you compute exponential moments and then apply Markov in an optimal way. Okay, so this gives you just concentration, but now let's look at large deviations. So again, look at the special case where the X_i are coin tosses: say the X_i are all i.i.d. Bernoulli(p), and now I want to understand the probability that S_n ≥ nq, with q > p. So S_n, the sum of the X_i, is typically np plus some fluctuation of order √n, but now I want to understand the probability that S_n is much larger. Okay, I can do the same strategy. Let λ(θ) be the log moment generating function; in this case λ(θ) = log(p e^θ + 1 − p) — if I look at just one bit, E[e^{θX_i}] is literally p e^θ + 1 − p, and I'm just taking the log of that. So if I follow the same strategy, the bound that I get is P(S_n ≥ nq) ≤ e^{nλ(θ) − nθq}. And okay, again I can optimize over θ: I look at the variational problem sup_θ [θq − λ(θ)]; the θ that maximizes this, if I plug it in, gives me the best possible bound here. This is a convex function, and this supremum is the Legendre dual of the log moment generating function, and it turns out to be a relative entropy: it is I_p(q), the relative entropy of Bernoulli(q) with respect to Bernoulli(p), which is nothing but I_p(q) = q log(q/p) + (1−q) log((1−q)/(1−p)). Okay, so using that I get P(S_n ≥ nq) ≤ e^{−n I_p(q)}, without actually any error term — this is error-free in some sense. Okay, so that's an upper bound, but to get sharp results you also need a lower bound, and for that there's a standard technique called tilting. So the strategy is to do a change of measure
which makes the atypical event typical. So S_n ≥ nq is atypical under the product measure where everything is Bernoulli(p), but it's actually pretty typical if everything were Bernoulli(q). So instead of measuring the probability of this event under the original measure, you can measure the probability of the event under the new measure, which makes it likely, and then to get back to the original measure you estimate the Radon–Nikodym derivative between the two measures. Right, so just abstractly: suppose I want to measure the probability of an event A under P. Think of some other measure Q; then P(A) = ∫_A e^{log(dP/dQ)} dQ. Well, I just did not do anything: e^{log(dP/dQ)} dQ is literally (dP/dQ)·dQ = dP, so this is indeed P(A). But Q puts a lot of mass on A, and if I can actually understand this Radon–Nikodym term reasonably sharply, then I can get a lower bound. Okay, so: the upper bound is by this exponential-moment argument; the lower bound is by some change of measure and computing the Radon–Nikodym derivative. Okay, so you should keep this strategy in mind — this is something that will come up in our understanding of the events in the graph setting as well. Okay, so again back to this random variable X_H: recall X_H is the number of copies of H in some graph G. Just to formalize the notation more generally: for any graphs H and G, define the homomorphism density t_H(G) = (1/n^{|V(H)|}) Σ_{i_1,…,i_k} Π_{(x,y)∈E(H)} A_{i_x i_y}. So I'll explain what this is. G is, let's say, a graph on n vertices with adjacency matrix A, and H has, let's say, k vertices. This is essentially the density of copies of H inside G. So I want to map H inside G: let's say the k vertices of H get mapped to the vertices i_1, i_2, …, i_k in my
big graph G, and then what I want is for all the edges in H to map to some edge in G — A_{i_x i_y} is the (i_x, i_y) entry of the adjacency matrix A, which encodes whether G has an edge there or not. So this basically measures the average density of H inside G: essentially, if you take a random map of the vertices of H into G, what is the probability that it actually lands on a copy of H. Okay. And so, to answer the geometric question — recall: what does G(n,p) look like given that t_H(G) is large? Now, in the coin-tossing example, one thing I should have mentioned: the sum has density p typically, but if I tell you that the sum is actually much bigger than typical, equal to nq, then — as the lower bound via the tilting strategy also suggests — every coin basically now roughly looks like a Bernoulli(q). So in some sense, the optimal strategy for a sum of independent coins to have a large value is for each of them to have a higher density. So in terms of intuition, you can try to guess what the answer to this question is, and a reasonable guess could be that G(n,p) continues to look like an Erdős–Rényi-type graph, but with different densities. In principle, different edges can occur with different probabilities, so it can look like an inhomogeneous random graph. Oh, there are some questions — yeah, sorry. Yeah, so I started with every edge having probability p, but maybe some edges now have probability p_1, some other edges have probability p_2, in some sort of optimal way, so that it makes it likely for the number of copies of H to be large. Okay, and so this is what we will try to establish over the course of this lecture. So it will be convenient — when I'm saying that it looks like an Erdős–Rényi-type graph or it looks like some
graph — to make sense of this, one has to define some notion of distance on graphs. So this is what we will do next: we find a metric on graphs and embed them all in the same space. And so this is our definition. Let W be the space of all symmetric measurable functions from [0,1]² to [0,1]: essentially you have a function on the square which is symmetric and takes values in [0,1]. You can think of these as continuum limits of graphs — these are what will be called graphons. So note that any finite graph naturally embeds in W as a {0,1}-valued step function: I take any finite graph; there is an adjacency matrix A — a symmetric matrix with 0/1 entries depending on whether a particular edge is present or not, a square matrix of size n by n. Now I can just literally scale everything down to the unit square [0,1]², and the entries become small boxes: the graph can be identified with a step function built from 1/n-by-1/n squares whose values are 0 or 1 according to the adjacency matrix. So every graph naturally embeds into this space. Any questions about this? Oh — yeah, I was just going to mention there was a question from before the definition, about the intuition for the inhomogeneity. Right, so note that it's sometimes actually not the most optimal thing — as we will soon see — to raise the density of all the edges, because that's very costly. You can instead affect only a small part of the graph to get the required boost: you can put in a bunch of edges, much smaller than the total number of edges, in a very tiny part of the graph, in a compact way, so that it actually gets you the additional boost in the number of copies of H you need. Does that roughly answer your question? It's not always optimal to increase the density of all edges, because that's costly; there can be more optimal ways to pack a smaller number of edges in a compact form to get the additional boost.
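The tilting identity from the coin-tossing discussion a few paragraphs back — P(A) = ∫_A (dP/dQ) dQ — can be verified exactly, with no sampling, for the event {S_n ≥ nq}, by summing over the binomial distribution under both measures. A sketch (illustrative parameters are mine), also comparing with the Chernoff bound e^{−n I_p(q)}:

```python
import math

n, p, q = 40, 0.3, 0.45
k0 = round(n * q)   # nq = 18; round() guards against float noise

def binom_pmf(k, n, r):
    return math.comb(n, k) * r**k * (1 - r)**(n - k)

# Direct computation under the original measure P = Bernoulli(p)^n.
direct = sum(binom_pmf(k, n, p) for k in range(k0, n + 1))

# Change of measure: the same probability under Q = Bernoulli(q)^n,
# reweighted by the Radon-Nikodym derivative dP/dQ, which for a
# configuration with k ones equals (p/q)^k * ((1-p)/(1-q))^(n-k).
tilted = sum(binom_pmf(k, n, q) * (p / q)**k * ((1 - p) / (1 - q))**(n - k)
             for k in range(k0, n + 1))

# Chernoff upper bound via the relative entropy I_p(q).
I = q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))
chernoff = math.exp(-n * I)

print(direct, tilted, chernoff)
```

The two ways of computing the probability agree exactly (the weights cancel algebraically), and both sit below the error-free bound e^{−n I_p(q)}; under Q the event is no longer rare, which is what makes the tilted computation well-conditioned.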
optimal to increase the number of edges everywhere, because it's very costly; there are more optimal ways to pack a smaller number of edges in a compact form to get the additional boost. Okay. Right, so this is the space in which we will embed every graph, and now let's define the metric. For f and g in W, define the distance between f and g as follows: d_□(f,g) = sup over measurable subsets S, T of [0,1] of |∫_{S×T} (f(x,y) − g(x,y)) dx dy|. So you should think of [0,1] as your set of vertices — these are the continuum graphs on the unit interval. You take two sets S and T and look at the difference in the number of edges going between S and T in the graph of f versus the graph of g, and you maximize over all such choices of sets S and T. So this is known as the cut distance. Essentially what it is doing: two functions f and g are close in this distance if all the cuts are roughly preserved — the cuts meaning you have two sets and you look at the number of edges going across them, and this is indeed measuring the difference in the number of edges going across the two sets. Is the definition clear? I guess it's hard to convey in words. Okay. And it also turns out that, since we don't care about labels of the graphs, we should identify graphs, or graphons, which are the same up to a relabeling. Precisely, it means the following: let σ be a measure-preserving bijection of [0,1]; then we identify f and g if there exists such a σ with d_□(f, g∘σ) = 0, where g∘σ is the relabeled graphon g(σ(x), σ(y)). So these are functions on [0,1]², but I can relabel them and change their labels around, and I should think of two functions as the same if they're the same up to some relabeling. And so this
is what the following equivalence quantifies, and we will actually work with the quotient space W̃ of W under this equivalence. Okay. So I want to answer the question of the probability that a certain subgraph count is large. I know how to answer this for coin-tossing experiments, so I want to try to use that strategy to answer this question. For example, let's look at a slightly easier question. Suppose I have a graph, which I identify with the unit square, and I divide it into blocks — let's say four parts, a 2×2 grid of blocks. Typically, the density of edges between A_1 and A_2 is p — all of the blocks have density roughly p. The actual matrix is 0/1-valued, but if you zoom out, on average everything looks like p. So it turns out that a graph sampled from G(n,p), thought of as a graphon, is actually pretty close to the constant function p: all my graphs live on the same space, and the claim is that because on average the density of edges across any two sets is p, the sampled graph is very close, in cut distance, to the constant graphon p. Okay, but now I can ask a related question — instead of all blocks looking like p, maybe this block looks like p_1, this one like p_2, this one like p_3, this one like p_4, symmetrically. So I can prescribe different values to different blocks, and I can ask: what is the probability that the graph actually looks like this, instead of everything being p? And notice that this is exactly the coin-tossing problem. Okay, so I can
actually solve this problem. Right, so my graph is made up of coin tosses, which are all Bernoulli(p). Now suppose I tell you that this block looks like p_1 — which basically means the total number of edges in it is p_1 times the total number of possible edge slots there — and similarly for all the other blocks. Everything was initially p, but now I've prescribed different weights; this is exactly saying that some bunch of coin tosses has a different sum than its typical value. Is that clear? If I ask for the probability that my original graph looks like this block graph, it's essentially the coin-tossing problem, block by block. And so, by the same reasoning as the coin-tossing experiment, the probability of seeing such a weighted block graph is exactly exp(−[(|A_1| choose 2)·I_p(p_1) + |A_1||A_2|·I_p(p_2) + …]), and so on over all the blocks. How many coins are here? Within A_1 it's (|A_1| choose 2), and between A_1 and A_2 it's |A_1| times |A_2|, and these are all independent; so exactly, by this coin-tossing computation, the probability of getting such a weighted block graph is exactly this. Okay, so going back to our initial guess — that telling you the number of triangles is large makes the graph look like an inhomogeneous random graph — it's reasonable to try to understand what the best possible choice of these values is, such that this new set of values makes the atypical event typical. Okay, so this is the variational problem. Sorry — your question? You meant that all the terms are in the exponential? Oh yeah, exactly — it's literally the coin-tossing experiment, but repeated for every block, and the blocks are independent, so the probabilities are multiplied, and each of them is exponential of the
minus of the relative entropy cost. But okay, so I can try to choose different weights Q_ij. So I have my original graph G, which looks like a constant matrix p; for every (i,j) entry I put a weight Q_ij, so the matrix Q is the new weighted graph. Right, so this is the strategy — I'm extending the previous one: I consider all possible weightings of the graph, where the (i,j) entry gets the new weight Q_ij, and the entropy cost is I_p(Q_ij). And the weighting should satisfy that, under this weighting, the number of copies of H inside this new random graph is exactly what I want it to be, namely at least (1+δ) times the original expectation. So the variational problem is: minimize Σ_{i<j} I_p(Q_ij) over all symmetric weight matrices Q with t_H(Q) ≥ (1+δ)·t_H(p). Okay, so by this strategy, if you think of inhomogeneous random graphs as the candidates, then the best possible candidate is the Q that achieves this minimum — because the probability is going to be exponential of minus the cost, so the smaller this is, the higher the probability. So, any questions about this? So this is the best probability one can get by this strategy of considering inhomogeneous random graphs. Okay, so this actually gives you a lower bound: in some sense this is again a tilting-type strategy, where I only look at a particular class of measures — the inhomogeneous random graphs — and I pick the one that makes the event most likely, the one with the least entropy cost. But how does one prove that this is optimal — that this is the optimal strategy? And this is where the key result comes in: Szemerédi's regularity lemma, which has also had many applications, for instance in ergodic theory. It basically says that any graph can be approximated by a block graph. So I will be
slightly more precise but sort of roughly this is any graph can be approximated by a block random graph where the number of graphs is only a function of the error and not the size of the graph so think of every graph so every graph lives in this graph on space which is all of unit order so suppose I take any graph and I want to sort of find so this basically says that there is a block graph with the number of blocks let's say only k so as the distance in this cut norm that I define the distance between the graph that I started with an arbitrary graph and this block graph is going to be less than epsilon but the number of blocks which is k is only a function of epsilon so so I'll see the weaker version of this so the weak regularity lemma is going to be less than epsilon by freezing canan very sure Jandini was a question before someone was wondering why do we hope that the correct tilting measure should have a dependent this is exactly why because it says basically that every graph looks like a block independent graph so previously the strategy was to sort of just make a guess and then sort of that will give you a lower one of the property that's one of the possible candidates for the new measure but the regularity lemma tells you that it is actually will be optimal because actually any graph looks like an enormous random graph with the number of blocks which is only a function of the error does that roughly answer the question so let me sort of continue with this and it might be sort of slightly more clear once I sort of state this so essentially it basically says the following so given any graph any graph there exists there exists a partition of V into K classes V1, A2, AK such that such that if you look at this block graph where rho ij so where rho ij rho ij so where the ij entry of this block graph is let's say rho ij so rho ij is actually rho ij is actually rho ij is actually rho ij is actually rho ij is actually so I have my original graph G right and I have 
the partition A_1 through A_K. So it says that if you now look at the block graph with K×K blocks, where the (i,j) block has the entry ρ_ij — and ρ_ij is the number of edges going between A_i and A_j in my graph G, divided by |A_i| times |A_j|; this is essentially the edge density of G across A_i and A_j — then there exists a partition into K classes such that the distance between G, the original graph, thought of as a graphon, and this new block graph — let's call this partition P and the block graph G_P, so G_P is my notation for G quotiented by the partition — satisfies d_□(G, G_P) ≤ O(1/√(log K)). So you see that the graph can be huge, but if I think of it as a function on [0,1]² and I want to approximate it to within, let's say, ε, then it suffices to partition the graph into a number of blocks K such that 1/√(log K) is of order ε. The size of the graph is actually not very important; the number of blocks only depends on the error. And one key thing, one crucial property of this cut distance, is what is known as the counting lemma: if two graphs are close in this distance, the number of triangles in the two graphs will also be close. So, for any graph H: fix H, and graphons F and G; then |t_H(F) − t_H(G)| ≤ C(H)·d_□(F, G), where the constant C(H) depends only on H and not on the graphons — one can take C(H) = e(H), the number of edges of H.
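The cut norm and the counting lemma just stated can be checked by brute force on a small graph: for two step functions coming from graphs on the same n vertices, the supremum over measurable S, T is attained on vertex-aligned sets, so it is a maximum over 2ⁿ × 2ⁿ subset pairs. A sketch (my own check), using the labeled version of the distance, a 2-block averaging in place of an optimal regularity partition, and the constant e(K₃) = 3 for triangles:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 8

# A small random graph G and its 2-block average G_P: the block graph with
# densities rho_ij coming from an (arbitrary) partition into two classes.
A = np.triu((rng.random((n, n)) < 0.5).astype(float), k=1)
A = A + A.T
parts = [np.arange(0, n // 2), np.arange(n // 2, n)]
B = np.zeros((n, n))
for P in parts:
    for Q in parts:
        B[np.ix_(P, Q)] = A[np.ix_(P, Q)].mean()  # edge density of the block

def cut_norm(M):
    """Exact cut norm of the step function induced by M: the max over vertex
    subsets S, T of |sum_{i in S, j in T} M_ij| / n^2, computed for all
    2^n x 2^n subset pairs with one matrix product."""
    masks = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    vals = masks @ M @ masks.T          # entry (S, T) is 1_S^T M 1_T
    return np.abs(vals).max() / n**2

def triangle_density(M):
    """Homomorphism density t(K3, .) of the step function: trace(M^3)/n^3."""
    return np.trace(M @ M @ M) / n**3

lhs = abs(triangle_density(A) - triangle_density(B))
rhs = 3 * cut_norm(A - B)
print(lhs, rhs)   # counting lemma: lhs <= rhs
```

Even for this crude two-block partition, the triangle-density gap is controlled by three times the cut distance, which is exactly the mechanism the argument below relies on.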
So it says that if I'm interested in the number of triangles in my graphon F, and there is another graphon G which is pretty close to F in the cut distance, then the triangle densities of the two graphons are also very close to each other. Okay? So this essentially tells you that this strategy should be right. I want to look at the event that the number of triangles is large. The regularity lemma tells me that there is a block graph, with not too many blocks, such that this block graph is close to my graph in the cut distance — which basically means, by the counting lemma, that the number of triangles in this block graph is also pretty close to the original graph. Okay? And so basically it means that you can actually approximate, plugging in the above. We got a couple of questions. One: is ρ defined this way even if i equals j? The other person was wondering if you want to use the contraction principle. So for ρ with i equal to j: then you either count every edge twice, or, instead of looking at |A_i|², you look at (|A_i| choose 2), whatever — basically it should be the edge density. And the second question — is it because you want to use the contraction principle? At this point, actually no, because I'm not proving a full large deviation principle. What I'm doing is: I already told you how to compute the probability that the graph approximately looks like a given block graph, and the point is that I then want to union bound over all possible block graphs. Does that roughly answer the two questions? I believe. Yep, you're good. Using the above two facts, one can compute the probability that G looks like a given block graph, and then union bound over all possible choices of block graphs.
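The per-block coin-tossing computation above gives an explicit entropy cost for any candidate block graph. A sketch of that exponent (function names and example numbers are mine), assuming block sizes n_i and target densities q_ij:

```python
import math

def I(p, q):
    """Relative entropy of Bernoulli(q) with respect to Bernoulli(p)."""
    def term(a, b):
        return 0.0 if a == 0 else a * math.log(a / b)
    return term(q, p) + term(1 - q, 1 - p)

def block_entropy_cost(p, sizes, Q):
    """Leading-order exponent of 1/probability that G(n, p) looks like the
    block graph with given block sizes and density matrix Q:
    sum over pairs of blocks of (#edge slots) * I_p(q_ij)."""
    k = len(sizes)
    cost = 0.0
    for i in range(k):
        cost += math.comb(sizes[i], 2) * I(p, Q[i][i])    # within block i
        for j in range(i + 1, k):
            cost += sizes[i] * sizes[j] * I(p, Q[i][j])   # between i and j
    return cost

p = 0.2
sizes = [50, 50]
# Leaving every density at p costs nothing; raising a density costs entropy.
print(block_entropy_cost(p, sizes, [[p, p], [p, p]]))        # 0.0
print(block_entropy_cost(p, sizes, [[0.4, p], [p, p]]))      # strictly positive
```

The variational problem then asks for the cheapest such cost matrix subject to the subgraph-density constraint; the union bound below is over exactly these candidates.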
Right, so the regularity lemma tells me that it looks like some block graph. Neither do I know exactly what the parts are, nor do I know exactly what the edge densities are. But what I can do — because the number of parts is not too large and the edge densities are all between zero and one — is union bound over all possible choices of block graphs: consider all possible partitions of V into K blocks, and all possible edge densities rho_ij up to an epsilon error. Of course there are infinitely many values of rho_ij between zero and one, but up to an epsilon error: you can take [0,1] and divide it into a mesh of size epsilon, and look at all possible edge densities from this mesh, together with all possible partitions of the full vertex set into blocks. Then for each such block graph you can use this bound — the probability that my graph looks like that particular block graph is as above — and union bound over all possible block graphs. And you will see that if the union bound is over a not-too-big set, the upper bound one gets is again e to the minus whatever I wrote, plus smaller-order terms. Because remember, this was the probability for any particular block graph, and phi_{H,n,p}(delta) was the cost of the best possible such block graph, so all these probabilities are at most e to the minus this. Now, if the union bound contributes only a smaller-order term, then this will still dominate even after union bounding over all possible partitions and edge densities. So this is the claim here. The point is that this fails if p is going to zero with n faster than a polylog. The full LDP on graphons for a fixed p was proven by Chatterjee and Varadhan in 2011, and the argument above, which is similar but more combinatorial, was by Dubaski and Perez, I think in maybe 2014 or 2015.
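Not from the lecture, but the size of this union bound is easy to sanity-check with back-of-the-envelope arithmetic: for fixed K and epsilon, the log of the number of candidate block graphs is of order n log K, negligible against rates of order n squared. A sketch (the mesh count and constants are illustrative):

```python
import math

def log_num_block_graphs(n, K, eps):
    """log of the number of candidate block graphs in the union bound:
    at most K^n partitions of the vertex set, times an eps-mesh of
    K(K+1)/2 symmetric edge densities in [0,1]."""
    return n * math.log(K) + (K * (K + 1) // 2) * math.log(1.0 / eps)

# for fixed K, eps this is ~ n log K, which is o(n^2)
n, K, eps = 10**6, 100, 0.01
ratio = log_num_block_graphs(n, K, eps) / n**2
```

Since the dominating rate here is of order n^2 (times powers of p), this entropy of the union bound is indeed a smaller-order term when p is bounded below.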
Yeah, so the point is this union bound becomes too large — the possible choices of densities and blocks become too many — if p is going to zero. And this is because of this bound: the error that I'm allowing here is one over root log K. (Sorry, my iPad is hanging a little bit.) So the error I'm allowing is one over root log K, and it turns out that if p is actually going to zero, then this error must also depend on p, because the number of triangles is roughly p cubed. And because the error then has to depend on p, K becomes so large that the union bound fails. So this strategy is really suited to constant p; it can be pushed to p going to zero at only a logarithmic rate, and it completely fails when p goes to zero polynomially in n. This is what we will talk about tomorrow and the day after, but I'll also start tomorrow's lecture with some discussion of what this quantity looks like. So this is an abstract thing: the minimum possible entropy cost over all possible inhomogeneous random graphs. Now, as was asked before: why is the solution not always to raise everything from p to some larger value? It will indeed be the case that for some values of delta and p the solution is that everything should be homogeneous, with a value different from p. But there are other values of p and delta for which this is not the case, and the optimal strategy is not to change everything but to change only some parts of the graph. So this is what we will start with tomorrow, and then we'll talk about how to modify the arguments in order to treat sparse graphs, where p really goes to zero polynomially in the system size.
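To see numerically why the error's dependence on p blows up the block count, here is a rough sketch (purely illustrative: it takes the weak-regularity error to be 1/sqrt(log K) with constant 1, and ignores any worse dependence of K on the error):

```python
def min_log_K(target_err):
    """Smallest log K so that the regularity error 1/sqrt(log K)
    falls below target_err (taking the constant to be 1)."""
    return (1.0 / target_err) ** 2

delta = 0.1
# to resolve triangle densities of size ~ p^3, the error must be ~ delta * p^3,
# so log K must grow like p^(-6), and the union bound entropy n * log K explodes
required = {p: min_log_K(delta * p**3) for p in (0.5, 0.1, 0.01)}
```

For fixed p this is a (huge) constant, consistent with the dense theory; but as p shrinks the required log K grows polynomially in 1/p, which is where the union bound strategy breaks down.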
And this naive union bound strategy will not work there, and one has to come up with slightly cleverer ways to cover your space so that the total number of things you are considering is not too big. So I think I'm out of time. I'm happy to answer questions, but I'll just stop here. Okay. So first of all, let's give everyone the opportunity to unmute themselves and thank Shirshendu for this. So I see a question here. Yeah, so basically this result that I presented, from Lubetzky–Zhao, which uses this weak regularity lemma — sorry, the whole thing is slightly slow, which is unfortunate. Yeah. So this bound here actually allows you to take p as small as, let's say, one over (log n) to the one-sixth, or some such polylog. So this approach, which is pretty similar to the Chatterjee–Varadhan strategy, does allow you to take p going to zero, but only very slowly.
While large deviations theory for sums and other linear functions of independent random variables is well developed and classical, the set of tools to analyze non-linear functions, such as polynomials, is limited. Canonical examples of such non-linear functions include subgraph counts and spectral observables in random networks. In this series of lectures we will review the recent exciting developments around building a suitable nonlinear large deviations theory to treat such random variables and understand geometric properties of large random networks conditioned on associated rare events. We will start with a discussion on dense graphs and see how the theory of graphons provides a natural framework to study large deviations in this setting. We will then primarily focus on sparse graphs and the new technology needed to treat them. Finally, we will see how the above and new ideas can be used to study spectral properties in this context. If time permits, we will also discuss Exponential random graphs, a well known family of Gibbs measures on graphs, and the bearing this theory has on them. The lectures will aim to offer a glimpse of the different ideas and tools that come into play including from extremal graph theory, arithmetic combinatorics and spectral graph theory. Several open problems will also be discussed throughout the course. The lectures will not assume anything beyond familiarity with basic probabilistic concepts.
10.5446/55681 (DOI)
of this mini-course on large deviations for random graphs. All right, so yeah, thanks. Let me share my screen. Okay. All right, so we'll carry on from where we left off. I'll briefly recall what we did yesterday. There will be lecture notes available online about everything that I'm going to talk about, and actually more, and the things that I write up during the lecture will also be posted after every lecture — so you should expect the material from yesterday to be online very soon. Okay, so just to recap. Yesterday we saw how, for p not too small, the probability of the tail event — t_H(G) is the number of copies of H inside G; you can also think of it as homomorphism counts, because for p not too small those things are pretty much the same — is roughly the exponential of minus something which I was calling phi_{H,n,p}(delta), plus maybe some correction. And this was the solution to an optimization problem: you look at the best possible Q, where Q is a weighted graph; you look at the relative entropy of Q with respect to p, over all Q under which the subgraph count is typically at least as large as the target. Okay, so this is where we stopped yesterday. So roughly, for p not too small, the optimal strategy is to find a random graph which makes the rare event you want to attain typical, and the cost is the relative entropy. Just to be concrete: Q is a weighted graph, a symmetric matrix, and the cost is the sum over i not equal to j of I_p(Q_ij). This is something we defined yesterday: the relative entropy of a Bernoulli random variable with mean Q_ij with respect to a Bernoulli random variable with mean p. You sum over all pairs, but the degree of freedom is actually not the full matrix, only half of it, because it's symmetric.
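The Bernoulli relative entropy I_p appearing in this cost is a one-liner; a minimal sketch (the function name is mine, and the 0 log 0 = 0 convention is used at the endpoints):

```python
import math

def I_p(x, p):
    """Relative entropy of Bernoulli(x) with respect to Bernoulli(p):
    I_p(x) = x log(x/p) + (1 - x) log((1 - x)/(1 - p))."""
    t = lambda a, b: 0.0 if a <= 0 else a * math.log(a / b)
    return t(x, p) + t(1 - x, 1 - p)

# I_p(p) = 0: no cost for not tilting; pushing the density up gets increasingly costly
costs = [I_p(q, 0.5) for q in (0.5, 0.7, 0.9, 1.0)]
```

With Q the constant matrix q, the total entropy cost of the tilting is then roughly (n^2 / 2) * I_p(q), summing over unordered pairs.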
And so basically every term gets counted twice. So this is the cost. Now, at this point there are a couple of things of interest, such as: what does the solution look like? For the moment, let's stick with p fixed. (By the way, I forgot to open my chat box — okay, sorry.) Yesterday we did allow p to go to zero with n, very slowly maybe, but for the moment assume that p is fixed. Then this quantity, which is a problem in n, is related to a continuum problem. So there is a graphon version: it's exactly the same as this, but instead of a weighted graph of size n you look at a graphon. Again, it's the same thing: I minimize I_p(q) over graphons q such that something analogous to this holds. So now q is a symmetric function on [0,1] squared. I don't have to recall everything from the earlier lecture, but we briefly talked about graphons, the continuum versions of graphs — every graph naturally embeds as a graphon, a symmetric zero-one valued function on [0,1] squared. And I_p(q) is now just this sum replaced by an integral: it's literally the integral of I_p(q(x,y)). Okay, so this is the natural way in which the discrete problem transforms into a continuous problem. It is sometimes simpler to work with the continuous problem, and it's actually not very hard to go back and forth between the two. Okay, so instead of asking this question, one can ask the same question about phi_{H,p}(delta).
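The discrete-to-continuum correspondence can be sanity-checked numerically (this is not from the lecture; the block values are illustrative): for a step graphon, the integral of I_p(q(x,y)) over the square is just a block average, and it agrees with the normalized discrete sum for the blown-up weighted graph.

```python
import math
import numpy as np

def I_p(x, p):
    t = lambda a, b: 0.0 if a <= 0 else a * math.log(a / b)
    return t(x, p) + t(1 - x, 1 - p)

p = 0.3
B = np.array([[0.7, 0.2],
              [0.2, 0.7]])                 # a 2x2 step graphon with equal-size blocks

# continuum cost: integral over [0,1]^2 of I_p(q(x,y)) = equal-weight block average
cont = np.mean([I_p(v, p) for v in B.flatten()])

# discrete cost: blow the step function up to an n x n matrix Q and average I_p(Q_ij)
n = 200
idx = (np.arange(n) * 2) // n              # block membership of each vertex
Q = B[np.ix_(idx, idx)]
disc = np.mean([I_p(v, p) for v in Q.flatten()])
```

Because the blow-up is an exact step function, the two costs agree exactly here; for general graphons they agree in the limit.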
So a natural question is: when is the infimum — the arg min; this is an infimum, but everything here is compact, so it is attained — when is phi_{H,p}(delta) attained at a constant function? This is roughly the same as saying that, on the large deviation event, G(n,p), the original measure I started with, behaves like G(n,r) for some r. In this case we're looking at upper tail events; you can also look at lower tail events, but for the moment let's just focus on upper tails. So we had this question from before: when do you expect an Erdős–Rényi graph with density p, under this conditioning, to behave like a homogeneous Erdős–Rényi graph where everything simultaneously gets raised to density r? To answer this you have to solve the optimization problem. The relative entropy is a convex function in q, but the set over which you're minimizing is not convex, so it's actually not super clear how to solve this optimization problem. So let's spell this out. The way it works out, using a generalized form of Hölder's inequality, the above description is available exactly when H is regular. Here H is the graph we're embedding — the graph whose number of copies we're interested in; we want it to be larger than its typical value by a multiplicative factor, and we want to understand the graphon which has that property typically and has the smallest entropy cost with respect to the original measure, which is the constant p. Now, like I said, this is a non-trivial problem, but it turns out there's a nice application of Hölder's inequality which only works for regular graphs, and I will tell you roughly how it goes for, let's say, the triangle. So let's say H is K_3; the typical density is p cubed, in the graphon sense, so there is no n anymore.
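For triangles, the known answer to this question — the Lubetzky–Zhao convex-minorant criterion described next in the lecture — can be explored numerically. A rough sketch (not from the lecture; function names, grid sizes, and the chord-based minorant approximation are all mine): the point (r^2, I_p(r)) lies on the convex minorant of x -> I_p(sqrt(x)) exactly when no chord of the function's graph passes strictly below it.

```python
import math
import numpy as np

def I_p(x, p):
    t = lambda a, b: 0.0 if a <= 0 else a * math.log(a / b)
    return t(x, p) + t(1 - x, 1 - p)

def on_convex_minorant(p, r, grid=4001, step=40, tol=1e-4):
    """Approximately test whether (r^2, I_p(r)) lies on the convex minorant of
    x -> I_p(sqrt(x)) on [p^2, 1], by checking whether some chord of the graph
    of the function dips below the point."""
    xs = np.linspace(p**2, 1.0, grid)
    ys = np.array([I_p(math.sqrt(x), p) for x in xs])
    x0, f0 = r**2, I_p(r, p)
    best = f0
    for i in range(0, grid, step):
        for j in range(i + step, grid, step):
            if xs[i] <= x0 <= xs[j]:
                lam = (x0 - xs[i]) / (xs[j] - xs[i])
                best = min(best, (1 - lam) * ys[i] + lam * ys[j])
    return f0 <= best + tol   # True: replica symmetric (constant r optimal)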
And let's say I want to achieve density r cubed, where r is bigger than p. It turns out roughly this is the description. Look at a function: I said the relative entropy I_p(x) is a convex function, but now consider the function x going to I_p(root x). This is not a convex function anymore; in fact, its graph looks something like this — it starts at p squared, because I_p(p) is zero — and it's not convex. Now, the solution of Lubetzky–Zhao is as follows. You look at its convex minorant: consider the largest convex function which lies below the original function. And the claim is the following, where r cubed is the density that I want to achieve. The point (r squared, I_p(r)) is actually a point on the function, because I_p(r) is literally I_p of the square root of r squared. If this point also lies on the convex minorant, then the answer to this question is r: if you look at the graphon which has the smallest entropy with respect to p among those with density at least r cubed, then the graphon that minimizes that entropy problem is actually the constant function r, provided this condition is true. It was also shown that if this condition is violated, then there is a block-graphon construction which does better. So this classifies the region of parameter space where the constant function is the optimal solution versus where it's not. And if you draw the phase diagram — let's say this is the (p, r) space, and I'm looking at the upper tail problem, so only at r bigger than p — it turns out that there is a curve like this, such that this region is the replica symmetric region, which is sort of a physics
jargon for saying that the constant r is the solution, versus here, where there are better candidates. (Is this color actually visible? Maybe not too much — okay.) It turns out that this r squared to I_p(r) correspondence — x going to I_p(root x) — for a general regular graph becomes x going to I_p(x to the one over d), where d is the degree. So for any regular graph a similar story is true, where you instead consider the function x going to I_p(x^{1/d}), d being the degree of the graph. But the point is that this sort of argument only works for regular graphs. Because, like I said, entropy is a relatively nice function, but the set on which it's minimized is a non-trivial, non-convex set, and the homomorphism density is also somewhat hard to work with. The key point is that a version of Hölder's inequality allows you to pass from homomorphism densities to various norms of the graph, and that turns out to be useful; but this argument is currently only sharp for regular graphs, and it's an open problem to find the exact phase diagram for any connected non-regular graph. The bound that Hölder's inequality gives depends only on the max degree, and it turns out that that's not sharp for non-regular graphs. So I just want to comment a bit about the case of non-regular graphs. We have an unpublished result — it's been around for a while — which improves on this for K_{1,2}. So even for a path of length 2 the problem of getting the exact region is open. You look at all graphons — the typical density is p squared — you look at all graphons whose density is bigger than p squared, let's say r squared, and you want the one which has the smallest entropy cost, and you want to understand the exact region in which it's the
constant function is the solution versus where it's not. And if you apply the standard Hölder bound here, the bound that you get for the two-path region is no different from what you get for the triangle, because the Hölder bound only depends on the max degree, and the max degrees of the path of length 2 and the triangle are both 2, so you don't get any improvement. But you can actually work a little bit more, come up with some better inequalities, and get something slightly better — though it's still not sharp. Any progress for general graphs would be fantastic. By slightly better, you mean a slightly larger region? Yeah, a slightly larger replica symmetric region. So the region for the two-star looks something like — I'm not sure exactly what the figure looks like, but in principle something slightly beyond what the triangle gives. Is it clear that you always have some non-trivial region where you do not have the symmetry? For p very small, yes, as you will see. So far we've been talking about p being fixed, and we will soon transition to sparse p. The first comment is that for sparse p, where p is going to zero with n — which actually also kicks in when p is fixed but super small — you can really do better by having some block structure instead of a constant. There are some questions here asking for which graphs. So one point: for no graph is the non-constant optimizer known. It's known that there is something better than the constant, but for p fixed there is no graph for which it's known what the optimal solution is in the region where it's not a constant. Does that answer it? Okay. And for p very small you will see that having tiny blocks will actually be more optimal. Okay, good — any further questions? Okay, so
there are actually simulations. Simulations predict — not literally for this model, but for related models where simulations have actually been done, and it's generally believed it should be the same for this model as well — that the optimizers in the symmetry-breaking region should be stochastic block models with two blocks. So when the solution is not a constant function, it should be roughly approximated by a block graphon with two parts — hence four blocks — with different densities across the blocks and inside the blocks. There have been extensive simulations done by Charles Redding and his group. So a lot is actually known for sparse graphs, where you can really handle the entropy; for dense graphs this is roughly the story. It's not super clear analytically why this should be, but maybe you can think of doing some perturbative argument to say that if you have multiple blocks, you can improve things by some sort of convexity argument and go down to two parts — so four blocks literally, but only two parts of the partition; that's what is expected. Can you scroll? Your iPad is not visible. Oh, actually, I see — did it freeze? Sorry, let me try it. Okay, sorry — how long has it been not visible? When I started writing about the simulations? So that was not visible, okay. Yeah, so: simulations predict that the optimizer should look like this. I think it should be fine now — sorry, and thanks, Melendia. Okay, so I'll soon transition into the sparse case, where you can actually do a lot because you have much more control on the entropy, but there is one more thing I wanted to talk about in the dense case. For which H are these simulations — triangles, paths of length 2? And also, to be
completely precise, I think the simulations are not for exactly the setting that I'm talking about; they're for related settings of exponential random graphs, and also — we'll soon get to them — constrained models, where you look at a graph picked uniformly from the subset of graphs with a given number of edges and a given number of triangles. Okay, so, related to what Omar said, now we will talk about a particular class of measures. Recall that a standard way to prove large deviation bounds is by computing exponential moments, which naturally leads to the study of a class of Gibbs measures on graphs of the following form: the probability of G is proportional to the exponential of some Hamiltonian, and the Hamiltonian roughly looks like the following. Everything is of size n, so it is n squared times the sum over i from 1 to s of beta_i t_{H_i}(G). So you fix graphs H_1, H_2, ..., H_s — H_1 could be an edge, H_2 could be a triangle, H_3 could be a four-cycle, or whatever; fix some finite collection of graphs — and now you want the measure to charge each particular graph an exponential weight roughly governed by the densities of these subgraphs in G. Which means, basically: if the beta_i are positive and large, then there is more mass on G with high density. This is known as the exponential random graph model, or ERGM. On the other hand, if the beta_i are negative, then you put more mass on graphs that avoid those subgraphs. Let's say there was just H_1, and H_1 was a triangle, and beta_1 was a large negative number; then you put much more mass on graphs that have very few triangles, and you would expect the mass to be more or less concentrated on graphs that are mostly bipartite. So if beta is negative, then
subgraphs are avoided. Okay, now, this is not a probability measure; you have to normalize by something to make it a probability measure. Let's call that normalizing constant Z — so Z is the normalizing constant, meaning this divided by Z is now a probability measure; Z is also sometimes called the partition function — and let's call psi the log of the partition function. It turns out, not surprisingly, that large deviation theory can be used to study ERGMs, and in particular the free energy, or log partition function. So what is Z? Z is the sum over all graphs of the exponential of the Hamiltonian, which is n squared times the sum of the beta_i times the densities. I don't remember exactly what I did in the last lecture, but t_{H_i}(G) should be thought of as the density of H_i in G; this is an order-one quantity, so you basically normalize everything by all the factors of n. For example, if H_i was a triangle and G was an Erdős–Rényi graph, this would typically be like p cubed, not n cubed p cubed. Another point: you multiply by n squared because that's roughly the entropy of the full system, since there are roughly n squared edges. And it turns out you can use large deviations to answer this: psi roughly looks like n squared times a variational formula, a supremum over all graphons W. Actually, let me just normalize it: H is the Hamiltonian, normalized so that it is again an order-one quantity that makes sense for graphons, and then the formula is H of W minus the entropy term, which I also normalize just to keep things consistent. Essentially what it says is: I want to sum over all graphs the exponential of this Hamiltonian, and the Hamiltonian is literally subgraph densities. So what it
does is this: if you use the large deviations framework for graphons, you can divide your space of graphons into various corners depending on the value of the Hamiltonian. H is a continuous function on the space of graphons — it is built out of subgraph densities — so in each part of such a partition H does not oscillate much. Now, for Z you have to sum up the contributions of each of these parts. If a part is close to a particular graphon W, then — note that Z is sort of a tilting of the uniform measure, which gives each graph mass roughly 2 to the minus n squared, since the system size is n squared — the contribution from that part is roughly the exponential of H of W times the mass the uniform measure puts there, which is comparable to the entropy of W with respect to one half. Of course there is a sum involved, but the number of terms in the sum is so small that the part with the biggest contribution dominates. This is not an uncommon phenomenon, and it is what comes out of the large deviation theory; it's basically an application of Varadhan's lemma given the large deviation principle, so I won't say much about it. The whole point is: if I have large deviation information, I can use it to understand exponential random graphs. But one thing that happens is that if the beta_i are all positive — this was proven by Chatterjee and Diaconis in 2013, relying on the previous work of Chatterjee and Varadhan — then actually nothing very interesting happens: the optimizers in the above problem are constants. So it turns out that exponential
random graphs, if all the beta_i are positive, actually look like maybe a mixture of Erdős–Rényi graphs. As for relabelings: the relabeling has entropy which is exponential in n, or maybe n log n — there are n vertices, so the total number of relabelings is at most n factorial, which is exponential in n log n — whereas all the exponential scales you are dealing with here are exponential in n squared, so the relabeling does not really affect this sort of consideration. Basically everything is exchangeable: any relabeling of a graph has the same probability under this measure. Does that answer the question? Yeah — everything is happening at a much larger entropy scale. Okay, so this means basically that the ERGM looks like maybe a mixture of Erdős–Rényi graphs — so not super interesting, but there are interesting questions to ask nonetheless. This was made even more quantitative in recent work of Eldan, and of Eldan and Gross, in I think maybe 2018–19; I don't remember the exact dates. Okay, so if the betas are all positive, then things roughly look Erdős–Rényi, but there could be multiple solutions. There was a paper, actually earlier than Chatterjee–Diaconis, by Shankar Bhamidi, Guy Bresler and Allan Sly, which looked at high and low temperature ERGMs. I won't make precise what these regimes are, but roughly what they mean is: in the high temperature regime there is a unique solution — this is not literally true, but roughly — and in the low temperature regime, multiple solutions. Right, so I claimed that the optimizers are all constant, but there could be just one unique constant solution, or there could be multiple constant solutions. And they proved that in the high temperature case many nice things happen: fast mixing of Glauber dynamics, fast mixing meaning order n squared log n, where n is the size of the graph, versus slow mixing, exponential in n, in the low temperature case. But let's look at the high temperature regime for the moment. So here it
says roughly that there is one Erdős–Rényi graph — there is one p star — such that the ERGM looks like an Erdős–Rényi graph with density p star. But one has to discuss what this means: looks like it in what sense? The Chatterjee–Diaconis result, or even the Bhamidi–Bresler–Sly result, tells you that this is closeness in cut distance. But you can ask for more: if you expect things to behave like an Erdős–Rényi graph, you can ask about, say, concentration of measure, or central limit theorems for observables — for instance, a central limit theorem for the number of edges in the graph. Of course, in an Erdős–Rényi graph every edge is independent, so of course you have a CLT; but if you expect the ERGM to be close to an Erdős–Rényi graph, you can ask things like this, and these questions are actually mostly open. In recent work with a student at Berkeley, who is now starting a postdoc at UCLA, we showed for the ERGM concentration of measure — actually Gaussian concentration of measure; I won't make precise statements because I don't have the time — for Lipschitz functions, roughly using Stein's method of exchangeable pairs. And for the second question, we show a partial CLT. Maybe the big question is to prove a central limit theorem for the total number of edges; what we show is that if you take edges e_1, e_2, ..., e_m, where m is little-o of n (n is the graph size), and they are vertex-disjoint — no two edges in this collection share a vertex — then the indicators of whether these edges are present satisfy a CLT as n goes to infinity. But it's an open problem, of I think much interest, to prove a full CLT. Is the statement clear?
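As an aside (not from the lecture), the Glauber dynamics mentioned above is simple to sketch for an edge–triangle ERGM. The following is illustrative: it uses unnormalized edge and triangle counts rather than the n^2-scaled densities of the lecture, and all names are mine.

```python
import numpy as np

def glauber_step(A, beta_edge, beta_tri, rng):
    """One heat-bath update: pick a pair {i,j} uniformly and resample that edge
    from its conditional law under weight(G) ~ exp(beta_edge*E(G) + beta_tri*T(G))."""
    n = len(A)
    i, j = sorted(rng.choice(n, size=2, replace=False))
    common = float(A[i] @ A[j])            # triangles the edge {i,j} would close
    dH = beta_edge + beta_tri * common     # energy difference: edge in vs out
    p_in = 1.0 / (1.0 + np.exp(-dH))       # conditional probability edge is present
    A[i, j] = A[j, i] = 1.0 if rng.random() < p_in else 0.0

rng = np.random.default_rng(1)
n = 20
A = np.zeros((n, n))
for _ in range(5000):
    glauber_step(A, 0.0, 0.0, rng)
# with both betas zero the stationary law is G(n, 1/2)
```

In the high temperature regime the Bhamidi–Bresler–Sly result says chains of this kind mix in order n^2 log n steps; in the low temperature regime, exponentially slowly.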
I mean, it was not super precise, but the point is that you expect things to be close to Erdős–Rényi, so things should roughly have some decay of correlations, and you should expect some Gaussian fluctuations — that we already proved — but to prove an actual central limit theorem is much more delicate, and we can only prove it for a smaller number of edges. The point is that the correlation estimate one has actually changes when two edges share a vertex versus when they don't, and that's why we really need the edges to be vertex-disjoint. Okay. By CLT you just mean for the number of edges in the graph? I mean, you can look at other observables, but the first step is just to look at the number of edges and prove a central limit theorem for that — prove a full CLT; yeah, I should have made that clear, thanks. Okay, so that's what I wanted to say about the dense setting. There are versions of this even when p goes to zero, and there are structural theorems appearing in the work of Eldan, and later Eldan and Gross. But let's revert back to our original question of large deviations. As we have already seen, large deviations and exponential random graphs are not actually that different: if you have a very good understanding of exponential moments, you can use that to get large deviation bounds. But let's just go back to our original setting, to this phi_{H,n,p}(delta) quantity, where we saw that when p was fixed the optimizer was constant in some regime and maybe non-constant in some other regime. Now we will consider the case where p goes to zero. Frank is asking if the solution is known close to the crossover line in the LZ problem for p fixed. No, I don't think for p fixed anything is known. I think there are some related problems in extremal graph theory — these are actually not the same problems
but they are of similar flavor, and I think Razborov actually does have some understanding around some phase boundaries using his theory; but in this concrete setting I don't think anything is known, though there are things in extremal graph theory where some progress has been made. Okay, so p will go to zero with n for the rest of today's lecture and tomorrow's lecture. First observation: the constant function is never the solution. What do I mean by that? I have this optimization problem: I want to look at a weighted graph such that the subgraph density is bigger than typical, and I want to understand what the best possible tilting is. And I claim that when p goes to zero, the constant function is actually never the solution. Why is that true? Let's see. When I say the constant function is never the solution, I mean when the subgraph you care about is non-trivial — if the subgraph is literally an edge, then the constant function is of course the solution, since that is just a binomial tail problem — so let's agree we're always talking about non-trivial graphs. Whenever the graph is non-trivial, meaning not an edge, the constant function is never the solution, because of the cost. If the constant function were the solution, it would imply that the total number of edges increases: the total number of edges was typically roughly n choose 2 times p, and that gets boosted up by some (1 plus delta prime) factor, and it turns out the cost of this is roughly e to the minus n squared p, with some constant depending on delta — it's an easy check that this is what it turns out to be. Now it turns out that there are better
strategies. For a triangle, say: I can pick a small part of the vertex set, of size a constant times np, and put in all the edges there; in the adjacency matrix, take an np-by-np box and fill it in completely. How many edges is that? Roughly (np choose 2); ignoring the factor of two, roughly n²p², but arranged very compactly. So the cost: I am introducing about n²p² edges, each appearing with probability p, so the cost is p^{n²p²} = e^{-n²p² log(1/p)}. Now n²p is much, much larger than n²p² log(1/p) when p is small, so this is a much more efficient strategy than increasing everything. I am running out of time, so I want to tell you what phi(H, n, p, delta) is; one has to bear with me slightly. Let's say H is a connected graph. So planting a clique turns out to be one strategy to get the boost. Another comparable strategy is the following: take a small set of vertices and connect them to everything else, and perhaps also to themselves, though the set is so small it hardly matters. In the first construction you planted a clique. In this one you plant what we called an anticlique in our paper, subsequently called a hub, which is a very unbalanced complete bipartite graph. It turns out these are the two dominating mechanisms when p goes to zero. So I will give
you the asymptotics. Say H is connected, with maximum degree Delta, and let H* be the subgraph of H induced on its vertices of degree Delta; so if H is regular then H* is H, and if H is not regular you restrict to the maximum-degree vertices. Define I_{H*}(x), the independence polynomial of H*: I_{H*}(x) is the sum over k of i_{H*,k} x^k, where i_{H*,k} is the number of independent sets of H* of size k. (For people not familiar: an independent set is a set of vertices with no edges between them.) So I take the graph H, look at the high-degree vertices, pass to H*, form its independence polynomial, and then look at the solution theta of the equation I_{H*}(theta) = 1 + delta, where delta is the boost I want: recall I want to solve the problem of finding the cheapest reweighting that makes the H-density in my graph (1 + delta) times its typical value. Then phi(H, n, p, delta) roughly looks like n² p^{Delta} log(1/p) times a constant; in the triangle case Delta is 2, so n²p² log(1/p). The constant is one of two things: it is exactly theta if H is irregular, and if H is regular it is the minimum of theta and one half delta^{2/|V(H)|}. A complicated-looking expression, but basically what it means is the following: there are two terms here, the solution is their minimum, and these two
things correspond exactly to the two constructions: the bipartite (hub) construction, done optimally, gives theta, and the clique construction gives one half delta^{2/|V(H)|}. It turns out that if H is not regular, the clique construction does not do the job and you only have the bipartite construction, in which case theta is what you get. This is joint with Bhattacharya, Lubetzky and Zhao, in 2016. The regime of p must be mentioned: this is for p much, much bigger than n^{-1/Delta}, Delta being the max degree of H. So basically: there are these two competing constructions, and whichever does better is exactly what dominates the mechanism. Any questions? There is a small amount of time left, so I'll just finish with another related setting. Subgraph counts are nothing very special: they are polynomials of independent bits, and another related setting is arithmetic progressions in random subsets of {1, ..., n}, or even of Z/nZ. Essentially, you have the interval from 1 up to n; you pick each entry independently with probability p, so you get a random subset S of {1, ..., n}, and let's say you care about the number of arithmetic progressions: T_k is the number of arithmetic progressions of length k in S. One can develop a similar theory; these are also polynomials, but instead of tools from graph theory being handy, some arithmetic combinatorics comes into play. With Bhattacharya, Shao and Zhao, maybe 2017, I forget exactly when, we proved precise asymptotics for the corresponding variational problem, and it turns out that things like the following come out. It leads to this question: given m, which subset A of Z, say of size m, maximizes the
number of k-APs in A? This is actually not unrelated to the subgraph problems: there the analogous question is, I give you a number of edges, and among all graphs with that many edges, which one maximizes the number of triangles; the corresponding question here is, given m, among all subsets of Z of size m, which one maximizes the number of k-term arithmetic progressions. For k = 3 this was done earlier by Green and Sisask, and it extends to all k, and the answer is actually not very surprising: an interval is an extremal set. So unlike the graph case, where there are two competing candidates, the hub (or anticlique) and the clique, in the arithmetic progression case planting an interval is asymptotically optimal. That is not to say there is uniqueness: if an interval is optimal, any dilation of the interval will also be optimal, meaning you can replace the interval, which is an arithmetic progression of common difference one, by any arithmetic progression of the same size, and that will also work; but the number of such choices only contributes at lower order. The leading order is dominated by cliques and anticliques in the graph case, and by intervals in the arithmetic progression case. I am out of time. I did not talk about how to actually prove the large deviations in this case; I only talked about the variational side of things. In tomorrow's lecture I'll start with that, hopefully finish it quickly, and then move to how some of these ideas, and maybe new arguments, are needed to understand other nonlinear functions of graphs, like the edge of the spectrum: the largest, the second largest, and the first few largest eigenvalues. So I think I'll stop here.
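The interval-extremality statement for 3-APs can be sanity-checked by brute force on a tiny ground set. This is only an illustration for small n, not the asymptotic theorem, and the helper names are mine:

```python
import itertools

def count_aps(S, k):
    """Number of k-term arithmetic progressions (common difference >= 1) in S."""
    S = set(S)
    total = 0
    for a in sorted(S):
        d = 1
        while a + (k - 1) * d <= max(S):
            if all(a + i * d in S for i in range(k)):
                total += 1
            d += 1
    return total

# Among all subsets of {1, ..., n} of size m, an interval maximizes
# the number of 3-APs; check exhaustively for a small ground set.
n, m, k = 12, 6, 3
best = max(itertools.combinations(range(1, n + 1), m),
           key=lambda S: count_aps(S, k))
interval = tuple(range(1, m + 1))
print(count_aps(best, k), count_aps(interval, k))
```

The maximum is not unique (arithmetic progressions of other common differences tie the interval), so the check compares counts rather than the sets themselves.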
So let's give everybody the opportunity to unmute their mics, and let's give a round of applause to Shirshendu.
While large deviations theory for sums and other linear functions of independent random variables is well developed and classical, the set of tools to analyze non-linear functions, such as polynomials, is limited. Canonical examples of such non-linear functions include subgraph counts and spectral observables in random networks. In this series of lectures we will review the recent exciting developments around building a suitable nonlinear large deviations theory to treat such random variables and understand geometric properties of large random networks conditioned on associated rare events. We will start with a discussion on dense graphs and see how the theory of graphons provides a natural framework to study large deviations in this setting. We will then primarily focus on sparse graphs and the new technology needed to treat them. Finally, we will see how the above and new ideas can be used to study spectral properties in this context. If time permits, we will also discuss Exponential random graphs, a well known family of Gibbs measures on graphs, and the bearing this theory has on them. The lectures will aim to offer a glimpse of the different ideas and tools that come into play including from extremal graph theory, arithmetic combinatorics and spectral graph theory. Several open problems will also be discussed throughout the course. The lectures will not assume anything beyond familiarity with basic probabilistic concepts.
10.5446/55682 (DOI)
So, hi everyone. Before we get started with the lecture, I'd just like to give you a quick overview of what's happening in the next few weeks. Next week is free; there is an event organized by the One World Probability Seminar, if you're interested. In two weeks there is the Clay School on Integrable Probability, and we'll start back again at the beginning of August: on August 3, 4 and 6 we'll have lectures by Nina Holden on the Schramm-Loewner evolution and imaginary geometry, and the week after, on August 10, 11 and 13, we'll have lectures by Tom Hutchcroft on uniform spanning trees in high dimensions. Other than that, I just remind you how things work here. Everything is recorded and live-streamed on YouTube, so we ask you to put your questions in the chat and we can relay them to the speaker if necessary. We might have a break at midtime if you have questions, and towards the end of the lecture we'll switch off the recordings so you can ask questions directly with your mic. All that said, I'm happy to reintroduce Shirshendu Ganguly, who is going to give the third and final lecture of his mini course on large deviations for random networks. Okay, so let me start by sharing my screen. Good. So today's agenda is to complete the discussion from yesterday: how to prove the validity of the variational problem for the LDP. I'll talk about methods, and then also about how to study large deviation properties of spectral statistics. Just to recall: we saw phi(H, n, p, delta), the solution to the variational problem, the optimal entropy cost among inhomogeneous random graphs. We saw its asymptotics, which depend on whether H is regular or irregular; we looked at phi(H, n, p, delta) divided by n² p^{Delta} log(1/p), where Delta is the max degree of H.
So we saw that the limit of this is one of two things depending on whether H is regular or not: theta if H is irregular, and min(theta, one half delta^{2/|V(H)|}) if H is regular, where theta is the solution of I_{H*}(theta) = 1 + delta, with I_{H*} the independence polynomial of H restricted to its maximum-degree vertices. As I mentioned, in the irregular case there is only one candidate, while for regular H one of the two works out to be the optimal one, and they come from two constructions. Viewing things through the adjacency matrix of the graph: one construction plants all edges inside a small set of vertices, of size (a constant depending on delta times) np in the triangle case; the other takes a small set of vertices and connects them to everything else in the graph. These asymptotics hold for p much, much bigger than n^{-1/Delta}, Delta being the max degree. Below this threshold the hub construction actually ceases to work, because the size of that small set of vertices is, up to constants, n p^{Delta}, which is smaller than one when p is much smaller than n^{-1/Delta}; and of course you need at least one vertex. So the hub stops making sense once p is that small. Below the threshold, for regular graphs, the solution to the variational problem can still be found, and it turns out to be governed by whatever still makes sense, which is the clique. Okay, I don't want to spend too much time, but let me broadly describe the strategy of proof, and why sparsity helps: recall that for dense graphs we could not say such structural things.
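Before moving on: the recipe for theta recalled above is easy to test on the triangle, where H is 2-regular, so H* = H and I_{K3}(x) = 1 + 3x, giving theta = delta/3. A brute-force sketch, assuming the normalization I_{H*}(theta) = 1 + delta described in the lecture (helper names are mine):

```python
import itertools

def independence_polynomial(vertices, edges):
    """Coefficients i_k = number of independent sets of size k (brute force)."""
    coeffs = [0] * (len(vertices) + 1)
    for r in range(len(vertices) + 1):
        for S in itertools.combinations(vertices, r):
            if not any(u in S and v in S for (u, v) in edges):
                coeffs[r] += 1
    return coeffs

def solve_theta(coeffs, delta, lo=0.0, hi=10.0):
    """Solve I(theta) = 1 + delta by bisection; I is increasing on [0, inf)."""
    f = lambda x: sum(c * x**k for k, c in enumerate(coeffs)) - (1 + delta)
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

# Triangle: H is 2-regular, so H* = H and I_{K3}(x) = 1 + 3x.
triangle = ([0, 1, 2], [(0, 1), (1, 2), (0, 2)])
delta = 1.0
theta = solve_theta(independence_polynomial(*triangle), delta)
clique_term = 0.5 * delta ** (2 / 3)   # the clique exponent, |V(H)| = 3
print(theta, clique_term, min(theta, clique_term))
```

For delta = 1 this gives theta = 1/3, which beats the clique term 1/2, so the hub dominates at this boost level.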
The reason is that it is an entropy minimization problem. More precisely, you minimize the sum over i, j of quantities I_p(q_ij), where q_ij are the entries of a weighted matrix Q satisfying the subgraph-density constraint. We know I_p is a convex function, being a relative entropy, but it turns out that when p goes to zero it admits nice polynomial approximations; the keyword is that as p goes to zero, I_p(p + x) admits nice polynomial approximations. To give you a flavor of the kind of statement that turns out to be true: I_p(p + x) is bounded below by a constant times x log(x/p), where the hidden constant tends to one as p goes to zero. So you reduce the entropy minimization problem to a reasonably tractable optimization problem involving polynomials. Then the broad strategy for solving the variational problem, that is, finding the infimum over all weighted graphs, is as follows. The upper bound comes from a construction: to bound the infimum from above you just provide a construction that works, and its entropy is an upper bound. The clique and the anticlique (the hub), the structures I said are the dominant mechanisms, do the job: you construct them and compute their entropy cost, and that gives the upper bound. The lower bound is the challenging part. At a high level, you divide the graph into high-degree and low-degree vertices: I give you some arbitrary weighted graph which, let's say, you know has high subgraph density.
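As an aside, the entropy approximation just mentioned is easy to probe numerically: with I_p the Bernoulli relative entropy and a boost x = sqrt(p) (so that x/p grows), the ratio to x log(x/p) creeps toward one as p shrinks. A sketch; the test points are my own choice:

```python
import math

def I_p(q, p):
    """Relative entropy of Bernoulli(q) with respect to Bernoulli(p)."""
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

# As p -> 0, I_p(p + x) is comparable to x * log(x/p), with the hidden
# constant approaching one; probe this with the boost x = sqrt(p).
ratios = []
for p in [1e-2, 1e-4, 1e-8]:
    x = math.sqrt(p)
    exact = I_p(p + x, p)
    approx = x * math.log(x / p)
    ratios.append(exact / approx)
    print(f"p={p:g}: I_p(p+x)={exact:.4e}, x*log(x/p)={approx:.4e}, ratio={exact/approx:.3f}")
```

The convergence is only logarithmic in 1/p, so the ratio approaches one slowly.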
You want to show that the entropy cost of this graph is at least phi, at least whatever comes from the two constructions: any plausible candidate must cost at least as much as the better of the clique and the hub. To do that, and this is a delicate thing, you find the right degree threshold and separate out the vertices whose degree exceeds it: you look at the high-degree vertices and the low-degree vertices, and consider the contribution to the subgraph density from the two parts. It turns out that if H, the subgraph you care about, is connected, then it is more optimal to have just one part. A priori, of course, all of this sits on top of the base graph at density p: you start with edge densities at least p, since it never makes sense to push an edge density below its typical value, and on top of that you have some additional reinforcement. The additional contribution to the subgraph density that your graph is required to have must come from this reinforcement, which you separate into two parts: some of it comes from the high-degree part and some from the low-degree part. It turns out that if H is connected, it is actually not optimal to have both of them coexisting.
Either the high-degree part or the low-degree part dominates. The high-degree part corresponds to the hub: a bunch of high-degree vertices accounting for the additional density. The low-degree part corresponds to the clique: you take a small part of the graph and put in every possible edge there. Because the planted set is small, the degrees are not high, but the edges are so compactly arranged that you get a huge boost in the density of the subgraphs you want. That, at a very high level, is how the proof works; of course there are many delicate steps, including what exactly I mean by high degree, but I will not go into that. The connectedness is important: if H is actually disconnected, one can have mixtures, with a clique and an anticlique of suitable sizes coexisting so that the combination is the most optimal thing one can do. Okay, any questions? So this is why sparsity helps. We do not have such structural theorems in the dense case, where the only proof technique is again some polynomial approximation, via Hölder's inequality, and that is too crude to say anything meaningful, other than perhaps in the case of regular graphs, where exactly the constant function is the optimal solution, as we talked about yesterday. Now, this suggests, and it is in fact known, though it does not follow immediately, that the following geometric fact is true: conditioned on your graph G having a high density of copies of H, it is likely that G contains a clique or an anticlique close to the sizes mentioned.
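Going back to the cost comparison between boosting all edges and planting a clique: a small numeric sketch, constants suppressed, with function names that are mine rather than from the lecture:

```python
import math

# Exponential-scale entropy costs of the two upper-tail strategies,
# constants suppressed: raising every edge probability costs about n^2 * p,
# while planting a clique on about n*p vertices costs about n^2 p^2 log(1/p).
def uniform_cost(n, p):
    return n**2 * p

def clique_cost(n, p):
    return n**2 * p**2 * math.log(1 / p)

n = 10**6
for p in [1e-1, 1e-2, 1e-3]:
    ratio = clique_cost(n, p) / uniform_cost(n, p)
    print(f"p={p:g}: clique/uniform cost ratio = {ratio:.5f}")
# The ratio is p * log(1/p), which tends to zero: the clique is far cheaper.
```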
Conditional properties are delicate, and all of this is happening at an exponential scale, so you really need a very good understanding of the conditional measure to make a statement like that; the entropy approach does not quite yield this stronger statement. But the work I mentioned yesterday by Harel, Mousset and Samotij takes a more combinatorial approach to proving the large deviations, which I will allude to again soon, and they indeed prove a statement of this kind. These authors had sharp results for cliques, and following their work, as I mentioned yesterday, Basak and Basu settled the problem for all regular H up to the optimal sparsity threshold. It turns out that for irregular graphs things are significantly more delicate. To give you a flavor of the kind of things that can happen for very sparse p: for sparse p, homomorphism counts and copy counts behave differently. For example, take the four-cycle and look at copies of it inside your graph: you roughly expect of order n⁴p⁴ of them, choosing four vertices in n⁴ ways, each of the four edges occurring with probability p. But if you look at homomorphisms instead, you can get by with a degenerate structure: identify two opposite vertices of the cycle, so that the four-cycle maps onto a path, and then you get a homomorphism into the bigger graph. The number of such structures is n³p², three vertices and two edges. Now you see that when p is super small, n³p² is much, much bigger than n⁴p⁴.
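The comparison of these two scales can be made concrete; the crossover sits at p of order n^{-1/2}. A quick numeric sketch, with function names of my own:

```python
# Two contributions to C4 homomorphism counts in G(n, p): genuine
# four-cycles, about n^4 p^4, versus degenerate homomorphisms that
# collapse opposite vertices onto a path, about n^3 p^2.
def cycles(n, p):
    return n**4 * p**4

def paths(n, p):
    return n**3 * p**2

n = 10**6
for a in [0.4, 0.5, 0.6]:
    p = n ** (-a)
    print(f"p = n^-{a}: n^4 p^4 = {cycles(n, p):.3e}, n^3 p^2 = {paths(n, p):.3e}")
# The two scales cross exactly at p = n^(-1/2); below it the path term wins.
```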
This basically means p² is much, much smaller than 1/n, which is the same as saying p is much smaller than 1/√n. So as p becomes sparser you have to be really careful: subgraph counts, copy counts and homomorphism counts are different notions with different asymptotics. The work of Basak and Basu deals with the first kind, where you really count isomorphic copies of the regular graph inside the big graph. This is just to give you a flavor of how things can go wrong when p is super small; in fact, even the leading-order behavior of the probability is not well known for irregular graphs for very sparse p. Any questions? Okay. So this was all about the variational problem and how to solve it. But, as I said at the beginning, how does one prove its validity, that it is indeed what governs the large deviation? Recall that in the dense case we approximated any graph by a block graph, crucially with the number of blocks not growing with n (the proof we actually gave tolerates some slow growth, like a polylog, but generally the approach fails if there are too many blocks), and we used coin-tossing estimates to bound the probability of looking like a given block graph, followed by a union bound. That union bound is the crucial step that fails unless the number of objects you approximate by is not too huge, and the bounds we got do indeed blow up very quickly, preventing this naive approach from working when p goes to zero at a polynomial rate. But the general principle should stay the same. So the strategy should be: come up with more efficient covers of the space.
Recall that the original approach of approximating any graph by a block graph was overkill: it effectively attempts to prove a full large deviation principle for the Erdős-Rényi measure on the space of all graphs, whereas all we care about is the large deviation behavior of one particular random variable, in this case the subgraph density. So one might come up with more efficient notions of covers tailored to that application. I will give a quick flavor of a particular argument of Nick Cook and Amit Dembo which uses spectral properties of the graph, but before that I still want to talk a bit about how to approach the problem in general. A possible general approach, and this was indeed made to work in a breakthrough paper of Chatterjee and Dembo, around 2014, later refined, with various other notions put forward, by Eldan and by Augeri, maybe around 2018 (I don't quite remember the exact dates), is to go back to the Gibbs measure framework. This is the same as saying: compute exponential moments. If we can compute exponential moments sharply, then via Markov's inequality, or other ways of using a precise understanding of exponential moments, we can get large deviation bounds. Let me recall what I mean by this. You have some random variable; let's not call it z, let's call it f, a function from the hypercube to the reals. I will work with the hypercube of dimension n, whereas in the graph application you should think of graphs as elements of the hypercube of dimension n choose 2, but I'll not do that.
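The Markov step mentioned here is the standard Chernoff computation; in this notation, for any positive lambda:

```latex
% Exponential moments control upper tails via Markov's inequality:
% for any $\lambda > 0$,
\[
  \mathbb{P}\bigl(f(X) \ge t\bigr)
    = \mathbb{P}\bigl(e^{\lambda f(X)} \ge e^{\lambda t}\bigr)
    \le e^{-\lambda t}\, \mathbb{E}\, e^{\lambda f(X)} ,
\]
% and optimizing over $\lambda$ gives
\[
  \log \mathbb{P}\bigl(f(X) \ge t\bigr)
    \le - \sup_{\lambda > 0} \Bigl( \lambda t - \log \mathbb{E}\, e^{\lambda f(X)} \Bigr).
\]
% A matching lower bound is typically obtained by tilting the measure,
% which is why sharp control of the partition function
% $\log \mathbb{E}\, e^{\lambda f}$ yields large deviation asymptotics.
```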
So for the moment we work in an abstract setting: I have a general function f from the hypercube {0,1}^n to the reals, and I want to understand the measure it induces on the hypercube, where the measure of a configuration x is proportional to e^{f(x)}. Of course, to make it a probability measure you divide by the partition function: Z is nothing but the sum over x of e^{f(x)}. This is exactly the setting we saw yesterday with exponential random graphs, where f was again some subgraph-counting-type function. But there, because we were in the dense setting, a large deviation theory was already proven, and we used it to compute Z; here the point is to go the other way, to come up with a technique to compute Z and then understand large deviations. Actually, just to make this an expectation, let me also normalize by 2^n: define Z to be the expectation of e^{f(X)} under the uniform measure on the hypercube. Then the Gibbs weights are no longer exactly e^f/Z, but proportionality is all we need, and I think this will be easier. Okay, so the goal is to compute Z, or approximate Z. As I already said, once you have exponential moments you can use them to understand large deviations. But in principle Z is literally the probability of a particular set; this is a drastic example, not what we will eventually use: the probability of a set A under the uniform measure equals Z if f is zero on A and minus infinity off A. Of course, such an f has no smoothness whatsoever.
But if f is actually zero on a set A of the hypercube and minus infinity off it, then e^f is one on A and zero off it, and Z is nothing but the probability of A under the uniform measure. Such an f is not smooth, but maybe you can modify f and play with the strategy once you have a basic framework for computing such things. Okay, so: the Gibbs variational principle. There is a general recipe to compute this: log Z is a supremum, namely log Z = sup over nu of [expectation of f under nu, minus the relative entropy of nu with respect to mu], where mu is the uniform measure, putting mass 2^{-n} on every vertex of the hypercube. And what is nu? The supremum is over all probability measures nu on the hypercube. So: take any measure nu on the hypercube, look at the expectation of f under nu, and subtract the entropy cost of nu relative to mu; log Z is the supremum over all nu. This is actually not very hard to show, so it is an exercise: prove it. A hint: the optimum is attained at nu proportional to e^f, while for every other measure the expression is a lower bound. It just follows from convexity of relative entropy, and it is not unrelated to something I said in one of the earlier lectures: the log moment generating function and entropy are Legendre duals of each other. But this is a rather nasty optimization problem, because you are looking at all possible measures on the hypercube.
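The variational formula can be verified exactly on a tiny hypercube by brute force. The sketch below checks that nu proportional to e^f attains the supremum, and that an arbitrary measure stays below it; all names are mine:

```python
import itertools, math, random

random.seed(0)
n = 3
cube = list(itertools.product([0, 1], repeat=n))
f = {x: random.uniform(-1, 1) for x in cube}   # an arbitrary function f

# log Z with Z = E_mu e^{f(X)} = 2^{-n} * sum_x e^{f(x)} (mu uniform)
logZ = math.log(sum(math.exp(f[x]) for x in cube) / 2**n)

def functional(nu):
    """E_nu[f] - H(nu | mu) for a probability measure nu on the cube."""
    ent = sum(q * math.log(q * 2**n) for q in nu.values() if q > 0)
    return sum(nu[x] * f[x] for x in cube) - ent

# The optimizer nu* proportional to e^f attains the supremum:
Z_raw = sum(math.exp(f[x]) for x in cube)
nu_star = {x: math.exp(f[x]) / Z_raw for x in cube}
print(logZ, functional(nu_star))   # these agree

# Any other measure, e.g. a random one, gives a smaller value:
w = [random.random() for _ in cube]
nu = {x: wi / sum(w) for x, wi in zip(cube, w)}
print(functional(nu) <= logZ)
```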
So the mean-field approximation restricts nu to product measures, meaning every coordinate is independent: the supremum over all measures is at least the supremum over product measures, so this gives a lower bound. Another easy exercise: prove that if f is linear, the optimal solution is a product measure. That is actually obvious from the hint above: the optimizer has density proportional to e^f, and if f(x) is a sum of terms a_i x_i, then the exponential of a linear function factorizes and all the coordinates behave independently. So the mean-field approximation is tight when f is affine. Now, the whole point of this nonlinear large deviation theory is to understand what classes of f admit a reasonably nice mean-field approximation. Of course it is not going to be tight if f is nonlinear; but if f is approximately linear in some suitable sense, then approximating by product measures may not do too bad a job. So, under what conditions is mean field a reasonable strategy? This is the view that was put forward by Chatterjee and Dembo, and they came up with two conditions. The first is low gradient complexity. Basically, you look at the gradient of f at x for all x in the hypercube: think of f, initially supported on the discrete hypercube, as extended to a smooth enough function on the full continuous cube (otherwise one talks about discrete gradients, but let's think of f as smooth). Looking at the gradient of f at every vertex of the hypercube, you get a collection of 2^n vectors.
If f were actually linear, this collection would be just one vector. The low-gradient-complexity condition says that this set of gradients has small metric entropy, a small covering number. That is the notion of approximate linearity: the set of gradients is not too big. Then there was an additional condition, second-order smoothness, which looks at the second derivatives f_ij and says they are not too big: the sup norm of the second derivatives is uniformly small, which again points to the function being close to a linear function, at least piecewise. These were the two notions under which Chatterjee and Dembo proved, with a huge number of quantifiers which I am not getting into, that log Z can be approximated by the product-measure value. This was then used, for functions which were smoothings of the relevant indicator, to prove the large deviation principle: the variational problem involving entropy is indeed what governs the large deviation. Note that when nu is a product measure, the relative entropy of nu with respect to mu tensorizes (mu is of course a product measure, being uniform), and you essentially get the optimization problem we had before: a minimum over all weighted graphs of sums of entropies over the indices. The details are not crucial, but the key is that validating that problem is essentially the same as saying that this partition function has a nice mean-field approximation. So this is what Chatterjee and Dembo did, followed by Eldan, who improved the dependence on the parameters and also got rid of the second-order condition entirely.
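Eldan's replacement for the covering-number condition involves the Gaussian width of the gradient set, w(K) = E sup over x in K of the inner product of x with a standard Gaussian g. For the full hypercube {0,1}^d the supremum is attained coordinatewise, so the width is d times E max(g, 0) = d/sqrt(2*pi); a quick Monte Carlo sketch (names mine):

```python
import math, random

random.seed(1)

def gaussian_width_cube(d, samples=20000):
    """Monte Carlo estimate of E sup_{x in {0,1}^d} <g, x> = E sum_i max(g_i, 0)."""
    total = 0.0
    for _ in range(samples):
        total += sum(max(random.gauss(0, 1), 0.0) for _ in range(d))
    return total / samples

d = 10
est = gaussian_width_cube(d)
exact = d / math.sqrt(2 * math.pi)   # E max(g, 0) = 1/sqrt(2*pi) per coordinate
print(est, exact)
```

For a set of gradients that is genuinely low-complexity, the width grows much more slowly than this worst case.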
And the first condition, instead of having a small covering number, he replaced by a small Gaussian width. A small covering number means that you can find a small set of vectors which forms an epsilon net for the set. The Gaussian width of a set K is the expected maximum projection it has on a random direction: the expectation of the sup over x in K of the inner product of g and x, where g is a standard Gaussian vector. And it's actually well known that these two notions are related; one is usually a refinement of the other. If you have a bound on the entropy numbers, then you can use chaining to bound the Gaussian width, but it's generally easier to compute the Gaussian width. So this was Eldan. And then, finally, Augeri subsequently used mostly arguments from convex analysis to get further improvements, and in particular, for the case of cycles, brought the upper tail threshold down to this one over root n that I was mentioning. So I'm not going into that. But these were basically the three works that tried to make this approach precise: computing the partition function and then using that to get the large deviation principle. And the computation is possible if the function is approximately linear in some suitable sense, where the approximate notion is that the set of gradients is not too big. But you could also take a slightly more direct approach, which is closer to what we did for the dense graphs: just approximate directly. So here, you're trying to compute the partition function, and then you want to work with a function which looks like an indicator of a set. But you could try the more direct approach of approximating graphs by block graphs, in a slightly more efficient way. And this is what was done by Nick Cook and Dembo: the more direct approach, cover graphs by block graphs, using spectral properties. So I'll quickly try to at least give you a flavor of what their argument is. The starting point is this observation.
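The Gaussian width itself is easy to estimate by Monte Carlo; a sketch for the cube K = {-1, +1}^n, where the supremum is available in closed form (n and the sample size are illustrative):

```python
import math
import random

# Monte Carlo estimate of the Gaussian width of the cube K = {-1, +1}^n.
# For this K the supremum is explicit: sup_{x in K} <g, x> = sum_i |g_i|,
# so w(K) = E sup_{x in K} <g, x> = n * sqrt(2 / pi).
random.seed(0)
n, trials = 10, 20000

est = sum(
    sum(abs(random.gauss(0.0, 1.0)) for _ in range(n))
    for _ in range(trials)) / trials
exact = n * math.sqrt(2 / math.pi)
```

For a general K the inner supremum would itself be an optimization problem; the cube is the rare case where it collapses to a sum of absolute values.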
So let's define some notation. Let P_p be the product Bernoulli(p) measure on the hypercube, which is our base measure, because we've been working with Erdős-Rényi graphs. And this is the key starting point: for any closed convex set K in R^n (of course, our measure is only supported on the unit hypercube, but you can work with this), you get an upper bound P_p(K) at most exp(-I_p(K)), where I_p(K) is the best possible value, the inf over y in K of I_p(y). So essentially, this is going back to the original quantities that we had. It turns out that for any closed convex set, by convex duality, you can actually get a bound like that without any correction term. So for the probability of any closed convex set under this product measure: you take the best possible vector y in K, look at its relative entropy with respect to the product measure, and that's basically an upper bound. And you can also do some tilting ideas. Okay, maybe it's not completely clear, because the infimum might be at a discontinuity point, but anyhow, this is an upper bound, and of course there are already strategies to get lower bounds by tilting; for the upper bound, this is what will suffice for our purposes. Okay, and recall, as always, that proving the upper bound on the probability is the challenge, because that's where the union bound is; the lower bound is typically tilting or some very bare-hands construction. So the strategy is to cover the space, up to maybe an exceptional set, by closed convex sets K like above, with the additional property that the random variable of interest, say X, which in our case is a subgraph count, does not oscillate too much on K. Or, just to make the setting completely concrete.
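A concrete instance of this convex-set bound: for the half-space K = {y : sum_i y_i >= k} the infimum of the rate function sits at the constant vector (k/n, ..., k/n), and the bound reduces to the classical Chernoff estimate, which can be checked directly (parameters made up):

```python
import math

# The convex-set bound P_p(K) <= exp(-inf_{y in K} I_p(y)) for the half-space
# K = { y in [0,1]^n : sum_i y_i >= k }: the infimum is attained at y_i = k/n,
# giving the Chernoff bound  P(Bin(n, p) >= k) <= exp(-n * kl(k/n, p)).
n, p, k = 50, 0.3, 25  # illustrative parameters

def kl(a, b):
    """Relative entropy of Bernoulli(a) with respect to Bernoulli(b)."""
    out = 0.0
    if a > 0:
        out += a * math.log(a / b)
    if a < 1:
        out += (1 - a) * math.log((1 - a) / (1 - b))
    return out

# Exact binomial upper tail versus the entropy bound.
tail = sum(math.comb(n, j) * p ** j * (1 - p) ** (n - j) for j in range(k, n + 1))
bound = math.exp(-n * kl(k / n, p))
```

The interesting part of the general statement is that the same exponent works for every closed convex K, not just half-spaces.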
So you have a family: in our original application of the regularity lemma, we took the space of all graphs and covered them by block graphs, or balls around block graphs, and said that, because of the counting lemma, the subgraph count is a continuous function, so on the small balls the subgraph count does not change too much. It's exactly the same principle, but here you are being more flexible with the cover that you come up with. It turns out that, because of this a priori upper bound, it suffices to come up with any family of closed convex sets which is not too large, so that it actually allows a union bound, and then again you want your random variable to not oscillate too much on each of the individual elements of this cover. And so, using spectral properties of adjacency matrices: a cover in the operator norm suffices, for triangles at least. So if I have a bunch of adjacency matrices, and I can come up with a net in the operator norm, then it turns out that on each of these norm balls the triangle count will not change too much. And this is obtained essentially by a spectral projection argument, by which I mean you take your matrix and project onto the largest few eigenvalues; that gives you a low-rank approximation, and then you construct a net on that space. I don't have too much time, so I won't go into the spectral properties, which was maybe the original plan. One more comment, though: for arithmetic progressions, say of length k equal to three, which I was talking about earlier, one can construct covers using connections to Fourier analysis. It turns out that there is a very beautiful connection between APs of length three and Fourier analysis on Z mod n, and using that you can actually construct efficient covers.
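The spectral-projection idea mentioned above, that a dominant eigenvalue alone nearly determines the triangle statistics, can be illustrated on a toy graph; a sketch using plain power iteration (the graph and all sizes are made up):

```python
import math

# Rank-one spectral approximation nearly recovers triangle statistics when one
# eigenvalue dominates: trace(A^3) = sum_i lambda_i^3 counts closed 3-walks,
# i.e. 6 times the number of triangles.
# Toy graph: a 12-clique (top eigenvalue 11) plus two disjoint extra edges.
n, k = 16, 12
A = [[0.0] * n for _ in range(n)]
for i in range(k):
    for j in range(k):
        if i != j:
            A[i][j] = 1.0
for u, v in [(12, 13), (14, 15)]:
    A[u][v] = A[v][u] = 1.0

def matmul(M, N):
    return [[sum(M[i][t] * N[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

# Power iteration for the top eigenvalue (the clique makes the spectral gap large).
v = [1.0] * n
for _ in range(100):
    w = matvec(A, v)
    norm = math.sqrt(sum(c * c for c in w))
    v = [c / norm for c in w]
lam1 = sum(vi * wi for vi, wi in zip(v, matvec(A, v)))

A3 = matmul(matmul(A, A), A)
trace3 = sum(A3[i][i] for i in range(n))  # 6 * (number of triangles) = 1320 here
```

Here lam1^3 = 1331 against trace(A^3) = 1320, a relative error under one percent: discarding everything but the top eigendirection barely moves the triangle count, which is the point of covering in the operator norm.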
And for arithmetic progressions of greater length, you actually need to do more: standard Fourier analysis does not work anymore, and you have to do some random sampling to actually get reasonable covers. But the work of Harel, Mousset and Samotij, followed also by Basak and Basu, takes a more combinatorial approach: cover graphs according to the presence of combinatorial structures, which they call seeds or cores. These are basically structures that actually facilitate the large deviation event; this would be like the hub or the clique. These are structures in a graph such that, if you condition on the presence of such a structure, that actually boosts the expected count of the particular subgraph that you care about. And so all of these proofs are, at a very high level, similar: you want to construct an efficient cover. It could be spectral, it could be combinatorial, or, if you go back to the partition function approach, then you actually cover the space of gradients. And yeah, this is all I wanted to say about this. So quickly, in the remaining time, I want to talk about a different kind of observable: rather than a subgraph density, spectral statistics. In particular, let lambda one, lambda two, and so on, in decreasing order, be the eigenvalues of A(G), the adjacency matrix of G. It's known that, when p is not too small, lambda one is typically like np, and lambda two is typically like the square root of np. So you can ask about the probability of deviations, and Cook and Dembo again, using coverings by affine sets, showed the validity of the mean field approach. And then, using our previous understanding of cycles: recall that the cycle counts are connected to traces of powers of the adjacency matrix.
So if you look at a cycle of size s: if the largest eigenvalue is big, that implies that the number of cycles of size s is also big. And we have already understood the solution of the variational problem for cycle homomorphism counts, which was either a clique or a hub. And it actually turns out that these two events are roughly the same as s goes to infinity. So certainly a large eigenvalue can only come with a large cycle count, but these are approximately equivalent as s goes to infinity. So using previous work, joint with Bhattacharya, Lubetzky and Zhao, which I mentioned yesterday, about the solution to the variational problem, you actually get the following answer. So minus log of the probability of this event, let's call this event A, divided by n squared p squared log(1/p), is again a minimum of two things. One term corresponds to a clique and the other to a hub construction, and the clique term is the easier of the two. Recall that for the cycle there was a clique and a hub, and the size of the hub was given by the independence polynomial of the cycle. And it turns out that for cycles of size s, the independence polynomial satisfies a very nice recursion, close to Chebyshev-type recursions. You can actually use that to compute exactly what this polynomial is and compute what theta would be; theta was a root of a particular polynomial equation. And then you want to take the limit as s goes to infinity. So this is what actually goes into the proof. Actually, in the Cook-Dembo result, they proved a general result about the largest eigenvalue of an Erdős-Rényi graph perturbed by a deterministic matrix: lambda one of A plus B, where B is deterministic and A is the adjacency matrix, was actually treated.
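The recursion for the independence polynomial of a cycle can be made concrete; the sketch below uses one standard transfer-matrix form (the talk's exact "Chebyshev-type" statement may differ) and checks it against brute-force enumeration of independent sets:

```python
from itertools import combinations

# Independence polynomials of paths and cycles via a transfer-matrix recursion:
#   I(P_m; x) = I(P_{m-1}; x) + x * I(P_{m-2}; x),  I(P_0) = 1, I(P_1) = 1 + x
#   I(C_s; x) = I(P_{s-1}; x) + x * I(P_{s-3}; x),  s >= 3
# Polynomials are coefficient lists, lowest degree first.

def padd(p, q):
    m = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(m)]

def pshift(p):  # multiplication by x
    return [0] + p

def path_polys(m):
    polys = [[1], [1, 1]]
    for _ in range(2, m + 1):
        polys.append(padd(polys[-1], pshift(polys[-2])))
    return polys

def cycle_poly(s):
    P = path_polys(s - 1)
    return padd(P[s - 1], pshift(P[s - 3]))

def cycle_poly_brute(s):
    # Enumerate every independent set of the cycle C_s directly.
    coeffs = [0] * (s + 1)
    for r in range(s + 1):
        for S in combinations(range(s), r):
            if all((b - a) % s not in (1, s - 1) for a in S for b in S if a != b):
                coeffs[r] += 1
    return coeffs

def trim(p):
    while p and p[-1] == 0:
        p = p[:-1]
    return p

agree = all(trim(cycle_poly(s)) == trim(cycle_poly_brute(s)) for s in range(3, 9))
```

For example, the pentagon C_5 gives 1 + 5x + 5x^2: the empty set, five singletons, and five non-adjacent pairs.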
And so, by taking B equal to the constant matrix with entries p, one can actually also prove sharp large deviation tails for the second eigenvalue. Because typically the first eigenvalue of an Erdős-Rényi graph, as I said before, is np, and the corresponding eigenvector is basically all ones. So essentially, by subtracting B, the largest eigenvalue of A minus B is typically the second largest eigenvalue of A. And so you can use that to also get a version of this result for lambda two, which is also what we prove with Bhattacharya. I will not write out the full statement, but here the clique is the only thing; hubs don't show up. But there are some caveats, and I don't want to get into technicalities: we cannot actually prove a full large deviation principle, because we only work in a regime where the largest eigenvalue stays at its typical order, and that's actually needed for this proof. So I'm already out of time, but just in the last minute I want to say that when p is very small, to be precise when np is less than the square root of log n over log log n, which basically means p is like some polylog over n, then the edge of the spectrum, meaning lambda one and lambda two, is governed by high degree vertices, leading to localized eigenvectors. Meaning: typically, for reasonable values of p, the largest eigenvalue is np, and this is because the total number of edges is about n squared p; so lambda one is typically np because the total number of edges is roughly n choose 2 times p. However, when p is very small, say p is c over n, when the average degree is constant, then np is a constant. However, you actually have some very high degree vertices in the graph, just because of fluctuations, and these have degrees roughly log n over log log n.
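The elementary fact behind this heuristic is the star bound: a star with d leaves has largest eigenvalue exactly the square root of d, because the vector (sqrt(d), 1, ..., 1) is an exact eigenvector. A sketch (d is illustrative):

```python
import math

# For a star with d leaves the nonzero spectrum is {+sqrt(d), -sqrt(d)}:
# the vector v = (sqrt(d), 1, ..., 1), center first, is an exact eigenvector,
# since (A v)[center] = d = sqrt(d) * sqrt(d) and (A v)[leaf] = sqrt(d) * 1.
d = 16
n = d + 1
A = [[0.0] * n for _ in range(n)]
for leaf in range(1, n):
    A[0][leaf] = A[leaf][0] = 1.0

v = [math.sqrt(d)] + [1.0] * d
Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
residual = max(abs(Av[i] - math.sqrt(d) * v[i]) for i in range(n))
```

So a single vertex of degree d already forces an eigenvalue of size sqrt(d), which is why, at very small p, the degree fluctuations rather than the edge count set the spectral edge.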
So there are high degree vertices of this size, and of course these are the ones that actually govern the largest eigenvalues, because of the largest eigenvalue of just this structure: if you have a star graph of degree d, then lambda one is actually like root d. So in this case, these are the ones that actually govern the spectral edge. And so recently, with Bhaswar Bhattacharya and Sohom Bhattacharya, we proved a large deviation principle for the edge of the spectrum, both upper and lower tails, meaning: what is the probability that lambda one, lambda two, up to lambda k are bigger than their typical values by some multiplicative factor, or smaller by a multiplicative factor, by reducing to a large deviation principle for the extremal degrees. So basically what we say, roughly, at a very high level, is that if the largest eigenvalues are large then the largest few degrees have to be large, and vice versa, although there are many quantifiers; I will not state the actual statement, which appears in the lecture notes that I've posted. But yes, I think I'm out of time, and so I'll stop here, and I'm happy to answer questions. Okay, so I think everybody has the opportunity to unmute themselves. So I think we should give a big round of applause to thank Shirshendu Ganguly for his lectures.
While large deviations theory for sums and other linear functions of independent random variables is well developed and classical, the set of tools to analyze non-linear functions, such as polynomials, is limited. Canonical examples of such non-linear functions include subgraph counts and spectral observables in random networks. In this series of lectures we will review the recent exciting developments around building a suitable nonlinear large deviations theory to treat such random variables and understand geometric properties of large random networks conditioned on associated rare events. We will start with a discussion on dense graphs and see how the theory of graphons provides a natural framework to study large deviations in this setting. We will then primarily focus on sparse graphs and the new technology needed to treat them. Finally, we will see how the above and new ideas can be used to study spectral properties in this context. If time permits, we will also discuss Exponential random graphs, a well known family of Gibbs measures on graphs, and the bearing this theory has on them. The lectures will aim to offer a glimpse of the different ideas and tools that come into play including from extremal graph theory, arithmetic combinatorics and spectral graph theory. Several open problems will also be discussed throughout the course. The lectures will not assume anything beyond familiarity with basic probabilistic concepts.
10.5446/54056 (DOI)
Can you hear me, everyone? Yes. Okay. Can you hear me clearly? Yes, I can. Okay, so now I can get started. Hello, everyone, and it's my pleasure to present this work at CanQueue. The title is Equilibrium and Social Optimality in Queues with Service Rate and Customers' Joining Decisions. Okay. This is joint work with Ruiling Tian of Yanshan University, China. So this presentation has the following parts: we start with the introduction and some literature review, then I present the model description, we talk about the information scenarios, and I show the numerical results. Finally, I make a conclusion. Okay. First, I want to talk about the system I'm going to focus on. It's a queueing model with three features. The server can work at two different service rates, a high or a low service rate. Customers can make their decision of joining or balking based on their service utility. And thirdly, the queue length information may or may not be available to customers. Okay. So this model is motivated by some service systems in practice. Those systems usually can adjust their service capacity during operations, and this adjustment is usually based on the congestion level. Okay. There are many examples, like the border crossing stations between the U.S. and Canada, hospital ER rooms, highway toll plazas, call centers, ticket queues. So there are many examples. Okay. So now I want to give you a brief introduction to the literature, and where this work is positioned in the literature. Okay. The related literature is in the queueing area, certainly. For the multi-server setting, researchers have studied the U.S. and Canada border crossing stations, where the number of servers can be adjusted based on the queue length. So that's a multi-server setting, and here, in this paper, we focus on the single-server setting. Okay. This kind of queueing system belongs to the vacation queueing models.
And what I'm going to talk about, in particular, the model I present here, is a kind of working vacation model. Okay. This means that the server can reduce the service rate from the normal rate to a low rate when the system becomes empty. Okay. So it's a low service rate instead of a zero service rate. Over the past 10 years, several researchers in the queueing area have studied equilibrium joining or balking strategies in the M/M/1 setting with working vacations. Okay. However, here I want to make clear that those working vacation models actually focus on the special case where the threshold value is one. That means that whenever there is a customer in the system, the server adjusts the rate back to the normal rate, instead of using a threshold. And Guo and Hassin, in 2011 and 2012, studied for the first time the N-threshold M/M/1 queueing models with joining strategies, that is, with customer choice. But that's not working vacation. Working vacation means the service rate is never zero; it just switches between low and high. And Guo and Hassin's work is between zero and mu; that means it's like a regular vacation, not a working vacation. But they considered the threshold policy, where N can be greater than one: to resume the normal service rate, the queue length must exceed a threshold N. Okay. So far, there is no study of the threshold policy with server working vacations, where the server can switch between two rates, low and high, with customer choices. So our focus is to study the queueing system where the service rate is adjusted by the server between two rates, low and high, with a threshold policy, and customers can make their joining or balking decision based on the queue length and their service utility. Okay. So the setting here, I think, is quite interesting, because we consider both server and customer decisions. Okay.
So we want to investigate the customer equilibrium strategies and the social welfare maximization strategies when we consider the decisions on both sides. Okay. That's the unique feature of this study. Okay. Hopefully our study can provide some useful information for queueing systems where the manager can make decisions while considering customer choice. This picture quickly shows the system we study. So you can see that customers can either join or balk based on the queue length or the long-term average waiting time. And the service rate can be adjusted between mu naught and mu one; mu naught is the low rate. So we start with the normal service rate mu one here. And when the system becomes empty, the rate is adjusted down to mu naught. And then this mu naught rate is kept until the queue length exceeds N again. Okay. And then the service rate is adjusted back to mu one. Okay. So that's the picture here. And here the server's decision can be described by three parameters: mu naught, mu one and N. And the customers' decision is just joining or balking. Okay. So that's the picture we can use to illustrate this situation. And for the system state, certainly we use the number of customers in the system at time t as one state variable. The other state variable is the server's status. Okay. Zero represents the low service rate, or the working vacation period, and one represents the regular service rate. So we're going to have a two-dimensional continuous-time Markov chain. And that's the kind of queueing system we studied. Here I want to present two information scenarios. Okay. Either the observable queue or the unobservable queue. Okay. First we start with the observable queue case, where the queue length and the server status are disclosed to customers. So they can make their decision based on the observed queue length. And then the state space is pretty clear.
And we are going to end up with a finite-state CTMC. Okay. We can actually obtain closed-form expressions for the stationary queue length distribution. Okay. That's quite lucky. So, in this fully observable setting with two service rates, we can actually work out the formulas for the stationary probabilities. Okay. We use P(k,1) and P(k,0) to represent the joint distribution of the queue length and the server status. And I'm going to skip the details of the mathematical derivations and just present the distribution formulas. Okay. So here n_e is the customer equilibrium threshold. That means that if the queue length is less than or equal to n_e, the customer will join the queue, and otherwise the customer will balk. Okay. So here we assume that the equilibrium threshold, or what we call the balking threshold, is more than the capital N, which is the policy threshold, the server's rate adjustment threshold. Okay. Then we can get this expression. And also we can get the probabilities of the server status: P naught is the low service rate probability and P one is the high service rate probability. And certainly we can work out the performance evaluation, because the distribution of the queue length is obtained. So we can also get a closed-form expression for the mean queue length. Here L_o is the average queue length for the observable case. Okay. So we can also develop some cost functions and social welfare functions. Okay. And here we consider the cost structure: the waiting cost, the server's operating cost for the low service rate period and for the high service rate period, the balking customer cost, and the switchover cost. So we consider all those costs relevant to the system. And we can work out the expected cost rate by using the expected queue length and also the expected cycle time.
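As a sanity check on such closed forms, the finite CTMC of the observable case can also be solved numerically; a sketch with made-up parameters, and with my own assumption for the switching convention (high rate turned on when the queue reaches N, low rate resumed once the system empties):

```python
# Finite CTMC for the observable two-rate model: states (k, j), with k customers
# in system and j = 0 for the low rate, j = 1 for the high rate.
lam, mu0, mu1 = 1.0, 0.6, 2.0
N, cap = 3, 10  # rate-switch threshold and the customers' balking threshold

states = [(0, 0)] + [(k, 0) for k in range(1, N)] + [(k, 1) for k in range(1, cap + 1)]

def rates(state):
    k, j = state
    out = {}
    if k < cap:  # an arriving customer joins; at k == cap customers balk
        if j == 0 and k + 1 == N:
            out[(N, 1)] = lam      # hitting the threshold turns on the high rate
        else:
            out[(k + 1, j)] = lam
    if k >= 1:   # a service completion
        if j == 1 and k == 1:
            out[(0, 0)] = mu1      # emptying drops the server to the low rate
        else:
            out[(k - 1, j)] = mu0 if j == 0 else mu1
    return out

# Stationary distribution by uniformization plus power iteration.
Lam = lam + max(mu0, mu1)
pi = {s: 1.0 / len(states) for s in states}
for _ in range(100000):
    new = dict(pi)
    for s in states:
        for t, r in rates(s).items():
            flow = pi[s] * r / Lam
            new[s] -= flow
            new[t] += flow
    delta = max(abs(new[s] - pi[s]) for s in states)
    pi = new
    if delta < 1e-13:
        break

mean_queue = sum(k * pi[(k, j)] for (k, j) in states)
```

The same generator, with the arrival rate replaced by lam * q, also gives the unobservable-case performance measures for any trial joining probability q.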
And the social welfare is defined as the total utility of all customers joining the system, because a balking customer does not get any positive utility. So if we define n as the maximum queue length allowed, either by the customers' equilibrium or by social welfare maximization, we can actually work out the social welfare expression. Okay. And for the social welfare maximizing threshold, there are two cases: the threshold n can be either smaller than the service rate adjustment threshold, or it can be greater than that. Okay. So in either case we can work out the performance measures. Okay. So the social welfare can be written for the case where the maximum queue length allowed is less than the policy threshold, or for the case where it is greater. Okay. So the maximizer can be obtained. Okay. And the unobservable queue case is more complex. Because here we assume that the customer cannot observe the queue length, the customer will join the queue based on long-term statistics, like the long-term average waiting time. So with probability q, the customer will join, and with probability 1 minus q, the customer will balk. Okay. So this shows you the CTMC with the infinite state space. So again, we can work out the stationary distribution, just like in the observable case. We didn't use the matrix-analytic method; instead, we used the method of solving the set of difference equations. So we worked out closed-form expressions for the major performance measures. And then we can work out the individual utility for this case. And q can be determined by setting this individual utility to zero. So we can determine q. So here we need to first work out the performance measures with the unknown parameter q, and then we set this individual utility to zero to solve for the equilibrium joining probability. Then we can again work out the major performance measures, like total cost, social welfare, etc.
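The "set the individual utility to zero" step can be sketched on the classical single-rate unobservable M/M/1 model, a deliberate simplification of the two-rate system discussed here; all parameters are made up:

```python
# Equilibrium joining probability from the condition U(q) = 0, on the classical
# single-rate unobservable M/M/1 model:
#   U(q) = R - C * W(q),  with  W(q) = 1 / (mu - lam * q)  for lam * q < mu,
# where R is the service reward and C the waiting cost per unit time.
lam, mu, R, C = 2.0, 3.0, 2.0, 3.0

def U(q):
    return R - C / (mu - lam * q)

# Here W(q) is increasing ("avoid the crowd") and U(0) > 0 > U(1), so the
# equilibrium is the unique root of U on (0, 1); bisection finds it.
lo, hi = 0.0, 1.0
for _ in range(80):
    mid = (lo + hi) / 2
    if U(mid) > 0:
        lo = mid
    else:
        hi = mid
q_eq = (lo + hi) / 2

closed_form = (mu - C / R) / lam  # exact root of U for this simple model
```

With two service rates, W(q) need not be monotone: it can first decrease and then increase, so U(q) can cross zero several times, and the root-finding has to be applied per monotone branch. That is exactly how multiple equilibria arise.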
The interesting thing I want to show here is that the equilibrium strategies for the unobservable queue case are now very complex. We can observe some interesting queueing phenomena, okay, which we could not observe in the previous studies. First, consider this case. If we set the parameters in this way, with a stable queue for the low service rate and also for the high service rate, okay, if you look at this setting: lambda is the total arrival rate, mu one is the high service rate, mu naught is the low service rate, and C is the waiting cost parameter. And in this particular setting, you find that most of the time the system is in the low service rate status, so it's very much like a classical M/M/1 setting. Then W(q), the waiting time, changes with q as an increasing function. This is very intuitive. In the queueing economics literature, this is called "avoid the crowd", okay: the waiting time is an increasing function of the joining probability. And then you can see we have a unique equilibrium joining rate, or equilibrium joining probability. And for another setting, if we have an unstable queue for the low service rate, okay, in this case, and a stable queue for the high service rate period, then you find the waiting time function is totally different now. So we have a decreasing part and an increasing part; the first part is called "follow the crowd", and the second part is "avoid the crowd". So this queueing behavior was observed in Guo and Hassin's study published in Operations Research in 2011. So in that paper, they observed that kind of behavior. But in our setting, because we allow the server to serve customers when the queue length is low or the system is empty, our model is different from theirs. Their model has only one positive service rate; we have two service rates. Then we can actually observe even more interesting behavior.
So for this parameter setting, you can see that we have even more possible equilibria, okay. The horizontal axis is the service reward, okay, and the equilibria are where the service reward balances the waiting cost, that is, where R equals C times W(q). So in this case, our low service rate period again gives an unstable queue, and the high service rate gives a stable queue; certainly it must be stable here, okay, to make the stationary distribution exist. So now, with a moderate rate-switching frequency, neither very high nor very low, you can see even more interesting behavior. And here we have two stable equilibria, but the larger one is better, because this one makes the social welfare greater; the middle one is an unstable equilibrium. So now you can imagine that, with two service rates, we can observe even more interesting equilibrium behavior. So we make a conjecture, okay. If we have multiple service rates with a kind of multi-threshold policy, then, for this kind of unobservable queue case, if the customers can make their choice, there are possibly up to 2M minus 1 equilibrium points, with the largest one stable. This is for the multi-threshold policy, okay, so that's the more general case. We cannot get an analytical proof at this point, but it's a very plausible case, so that's why we state this as a conjecture. Finally, I want to present some numerical results, okay. This picture shows you the observable queue case; we start with the observable queue case, followed by the unobservable queue case. So this shows the customers' joining thresholds changing with the server's service rate threshold policy parameter N, okay. You can see that n star is the social welfare maximization threshold, and there is a horizontal line here.
This is the customer equilibrium threshold. So you can see that, as N increases, the customers' joining threshold under the social welfare maximization policy increases with it, but the customer equilibrium threshold remains the same, okay. So this picture shows you how the socially optimal threshold changes with the service reward, because the small v is equal to R times mu one over C; this is the scaled service reward. The service reward is between 0 and 15. You can see how the optimal threshold changes with the service reward, okay, under different traffic intensities. And this one is the social welfare, not the policy now; it's the maximum social welfare changing with N, okay, under different service rewards, okay. So those pictures were actually also presented in Guo and Hassin's paper, but we compare our generalized model with their model, because their model becomes a special case of our model. This is another case: here we show how the social welfare changes with my policy parameter under different low service rates mu naught, okay. And this is the unobservable case now: how the socially optimal joining probability changes with N. You can see this is the unobservable case under different arrival rates, and here this is under different service rates. And this shows the social welfare for the unobservable case, and the optimal joining probability, and how the optimal social welfare changes with the threshold policy parameter N under different low service rates, okay. And finally, we compare the two information scenarios, the observable queue case and the unobservable queue case, in terms of the equilibrium social welfare and the optimal social welfare, okay. And so we compared them in terms of the policy parameters, because here we consider N and the low service rate mu naught as the server's policy parameters, and mu one, the
normal service rate, we consider as a given parameter, because that's the total maximum service capacity; usually we cannot change the capacity in operations. But mu naught and N are like policy or operating decision parameters, so you can change them; that's why we focus on these two parameters, okay. And here it shows how the optimal social welfare changes with these two parameters in the two cases. And you can see that, in all those numerical examples, the observable case can improve the social welfare, okay. And also, in terms of operational efficiency, we can look at the optimal policy parameters for minimizing the cost. Remember I presented the expected total operating cost earlier; so we can minimize those costs by using the parameter N and the parameter mu naught, okay. So for either case we can get the minimum expected total cost, okay, and again, we can only do this numerically, okay. And this is the overall social welfare, that is, the social welfare minus the operating cost; so here we actually consider both the server's and the customers' utility, or their benefits. Then you can actually find the optimal parameter N for maximizing the overall social welfare, okay. This is just a sample of the extensive numerical analysis we did. Finally, I make a conclusion, okay. In this study, we have studied the equilibrium strategies when both customers and the server can make decisions, okay. So the equilibrium strategies are much more complex in the unobservable queue case than in the fully observable case, due to the interaction between these two types of decisions, okay. The deviation between the customer equilibrium strategy and the socially optimal strategy can be numerically determined using the closed-form solutions. We examined the effects on system performance of the policy and cost parameters, to gain a better understanding of the system behavior. Some system behaviors were not observed in earlier studies. And the total expected
cost can be minimized by the policy parameters N and mu naught. And we can also use simulation to verify some interesting findings of this study, okay. For future research, we can extend the analysis to congestion-based staffing policies in multi-server systems with strategic customers under different information scenarios. That's the conclusion of my talk. Thank you.
We consider the customers' equilibrium strategy and socially optimal strategy in a single-server Markovian queueing system with changeable service rates controlled by a threshold. When a customer arrives at an empty system, he is served by the server at a lower service rate. When the queue length reaches the threshold, customers are served at a high service rate. The optimal joining strategies of customers are studied under two information scenarios. The first scenario, where the server's state and the queue length are observable, is called the fully observable case. The second scenario, where the system state is not observable, is called the unobservable case. We analyze the steady-state distribution and performance measures of the system, and derive the equilibrium strategies. Finally, we compare the equilibrium strategy with the socially optimal strategy via numerical examples.
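A minimal sketch of the model in the abstract, assuming the simplest reading of the threshold rule (service rate mu0 while fewer than N customers are present, mu1 from N on, no hysteresis) and a truncated state space. The function name, truncation level K, and the parameter values in the usage note are illustrative, not from the talk:

```python
def stationary_threshold_queue(lam, mu0, mu1, N, K=500):
    """Stationary distribution (truncated at K states) of a birth-death
    queue with arrival rate lam and a threshold service policy: the death
    rate out of state i is mu0 if i < N, and mu1 once i >= N.
    Uses the birth-death product form: pi_i proportional to
    prod_{k=1..i} lam / mu(k)."""
    w = [1.0]
    for i in range(1, K + 1):
        mu = mu0 if i < N else mu1   # service rate in state i
        w.append(w[-1] * lam / mu)
    total = sum(w)
    return [x / total for x in w]
```

With the distribution in hand, quantities like the mean queue length (and from it a welfare or cost objective) can be swept over N and mu0 numerically, as the talk does.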
10.5446/54058 (DOI)
And here you see again the title, and my name, as you see, is Winfried Grassmann, and I talk about why Monte Carlo simulation is used so frequently. Monte Carlo simulation is really one of the most successful applications of operations research, in this field and even beyond. However, the bulk of queueing theory uses deterministic methods, that is, methods not resorting to randomness and random variables, and outside queueing theory many queueing models are solved by Monte Carlo simulation and the results of queueing theory are often ignored. How can we change that? How can we make classical queueing theory more successful? Now, the first question is: why is simulation so successful? I see three reasons for this; you may see others or more. One, for large problems Monte Carlo simulation needs far less computer time. Two, simulation is much, much easier — most people have no difficulty doing simulation. And three, simulation is much more flexible. Now, how can queueing theory come up to this? Let us look at the different issues; the first issue is execution times. In Monte Carlo simulation, execution times start at a high level, but they increase linearly with more complexity, as measured, say, by the number of variables you use. In deterministic methods, execution times start at a low level, but they increase exponentially with more complexity. So queueing theory has an advantage for small models, but not for large models. To show you this I have here this graph. The red line is the deterministic methods: they start at a low level and go up very steeply, and very soon they take more computer time than simulation, the blue line. Simulation needs a lot of computer time initially — in fact, before the advent of computers, simulation was not a practical method. However, the computer time increases very slowly with the number of variables. Now, the next thing is mathematics. How can we simplify the mathematics? I think we should stress numeric methods.
Maybe that is a little bit against the stream, but I say that people want results rather than formulas. Now, if you use numeric methods you have a choice: you can use the standard methods, or you can use methods tailored to deal with probabilities. For standard methods you have lots of software available, including software included in symbolic manipulation languages. However, in some cases it is advantageous to exploit the special properties of probabilities, mainly that probabilities are always greater than or equal to zero. This enables us to avoid subtractions and thus reduce rounding errors. I should be clear here: subtractions do not cause rounding errors, but they increase rounding errors committed earlier, in some cases quite rapidly, often by orders of magnitude. Now, if you use special algorithms to deal with probabilities, they often have a probabilistic interpretation, and that provides additional insights. The third thing is: can we increase the flexibility? One of the most used simulation paradigms is discrete event simulation. Can we copy this paradigm? We can use discrete event systems as they were described by Cassandras and Lafortune in their 2008 book. And what are discrete event systems? Well, a discrete event system has two main components: the state and events. The state is a set of variables — for instance, queue lengths — and all changes occur in discrete steps, as in discrete event simulation. Every change is caused by an event; an event can even be defined as something that changes the state. Queues are clearly discrete event systems, so why not use this paradigm in queueing theory? First let us look at the state. The state consists of physical state variables, variables that you are interested in — these could be queue lengths. We assume that the state variables are x1, x2, up to xd, and we also assume that each xi is between 0 and ni.
In addition to this we need supplementary state variables. If you have a schedule of future events, you need to keep track of the schedule, and for this you use a supplementary state variable; or you may record when past events have happened — again, that would be a supplementary variable. What you need is enough supplementary variables such that the distribution of the future state variables can be calculated. If you do that, the system becomes a Markov chain consisting of physical and supplementary state variables. Next we come to the events. Events come in different types, such as arrivals, departures, switches of waiting lines, and so forth. Each event is characterized by three things: an event function, an event condition, and the process that generates the events. First, the event function — we call it f_k(x) — and if the state before event k is x, the state after event k is f_k(x). For instance, if your state is (x1, x2) and you have an event function f_k(x1, x2) which increases x1 by 1, leaving x2 unchanged, this would be the event function of something like an arrival to line 1. Next we have an event condition: events are prevented from occurring if the condition is not met. And then we have an event process, which could be a Poisson process, a renewal process, a phase-type process, and so on. Now, here we have an example of a discrete event system. There are two bins, bin 1 and bin 2. They have finite sizes — bin 1 the size n1, bin 2 the size n2 — and arrivals to bin 1 occur at rate lambda 1, arrivals to bin 2 at rate lambda 2. The bins are used to do repairs, and each repair always needs one unit from bin 1 and two units from bin 2; repairs occur at a rate mu, and all these events are possible. This system has two state variables: x1, the number in bin 1, and x2, the number in bin 2. And what are the events? We have three events — replenish 1, replenish 2, and repair — and each event has an event function, an event condition, and a rate.
So replenish 1 has the event function (x1 + 1, x2), increasing x1 by 1 and leaving x2 unchanged, and it has the event condition x1 < n1, because if the bin is full you don't accept arrivals anymore; the rate is lambda 1. Similarly, you have event number 2, replenish 2, which occurs at a rate of lambda 2. Repair has the event function (x1 − 1, x2 − 2), and this event can only happen if you have enough in the bins, that is, x1 must be greater than 0 and x2 must be greater than 1; if that is true, the event happens at a rate mu. So now you have formulated the events, and now you need a transition matrix, because the whole thing is a Markov chain. So we take the transition matrix Q — and I do it here for the case where all events are Poisson — and we have to find q_ij, the rate of going from state i to state j, except for the diagonal, which, as you all know, must be determined such that the sum across each row is 0. You take a state out of the state space and you apply all events, and for event k you do the following: if the event condition C_k holds, you have a new state after the event — call it x_new, which is f_k of the present state — and the transition rate q(x, x_new) is set to the rate of event k. This you do for all events and for each state. Now, this is not really a transition matrix yet, because the x are vectors rather than numbers, so we have to associate our vectors with numbers: we have to find for each state a number, and this number is used to find both the row and the column of the state, and this number must be in a one-to-one relation with the state. If the state space is as simple as this — which is true for many discrete event systems, including the systems I talked about — then you can use the lexicographic order. That is, the state number is the rank the state has in lexicographic order, and this state number can be found by the following formula.
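The recipe just described — loop over the states, apply every event whose condition holds, accumulate the rates, then set the diagonal so each row sums to zero — can be sketched for the two-bin repair model. The function and parameter names are mine; the events follow the talk's description:

```python
from itertools import product

def build_generator(n1, n2, lam1, lam2, mu):
    """Build the sparse rate matrix of the two-bin repair model from its
    event list of (condition, event function, rate) triples: replenish 1
    and replenish 2 each add one unit to their bin; a repair takes one
    unit from bin 1 and two units from bin 2."""
    events = [
        (lambda x: x[0] < n1,             lambda x: (x[0] + 1, x[1]),     lam1),
        (lambda x: x[1] < n2,             lambda x: (x[0], x[1] + 1),     lam2),
        (lambda x: x[0] > 0 and x[1] > 1, lambda x: (x[0] - 1, x[1] - 2), mu),
    ]
    states = list(product(range(n1 + 1), range(n2 + 1)))
    Q = {s: {} for s in states}          # dict-of-dicts: only non-zeros stored
    for s in states:
        for cond, f, rate in events:
            if cond(s):                  # event enabled in state s
                t = f(s)
                Q[s][t] = Q[s].get(t, 0.0) + rate
        # no event maps a state to itself here, so the diagonal is simply
        # minus the sum of the off-diagonal rates
        Q[s][s] = -sum(Q[s].values())
    return states, Q
```

Only the non-zero entries are stored, which anticipates the sparsity discussion later in the talk.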
You take the sum of u_i x_i, where the u_i are the products of (n_j + 1) for j = i + 1 up to d. Okay, so now we have states and state numbers, and in this way you can enumerate the states and you get the matrix rather quickly. Things are more complicated if events are generated by renewal processes: then we must add supplementary variables, such as the time since the previous event of the same type, or the time to the next event if the event is scheduled. The times between events are often continuous random variables, and in this case we need to discretize, unless we use phase-type variables. Discretization has a disadvantage: it forces us to deal with multiple events that occur simultaneously, and that increases the number of entries in the transition matrix considerably. If you look at equilibrium solutions, we can save computer time by embedding the system at the points where the events do occur. Now, once you have generated your transition matrix, you want to find the transient and equilibrium probabilities — that is, we want to find pi_j(t), the probability of being in state j at time t — and you probably all know how this is done. We have a recursive formula here: pi_j(t + 1) is a function of the pi_i(t) and the p_ij. As you all know, if t is large enough, then pi_j(t) corresponds to an equilibrium probability pi_j, and in order to find the equilibrium probabilities we have a system of equations which is obtained from this system by just dropping the argument t. For continuous-time Markov chains we have a similar relationship: we have a rate matrix, or infinitesimal generator, Q. Instead of a recurrence relation we have a differential equation, because pi_j'(t) is the derivative of pi_j with respect to t, and here you have a formula for pi_j'(t). Again, if t is large enough, it converges to an equilibrium probability, which is written down here.
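The state-number formula, a mixed-radix representation, can be evaluated in Horner form without precomputing the u_i. A sketch; the function name is mine:

```python
def state_number(x, ranges):
    """Lexicographic rank of state x, where ranges[i] = n_i and each
    x_i runs from 0 to n_i.  Equivalent to sum_i u_i * x_i with
    u_i = prod_{j > i} (n_j + 1), evaluated in Horner form."""
    num = 0
    for xi, ni in zip(x, ranges):
        num = num * (ni + 1) + xi
    return num
```

For four variables each ranging over 0..9 this is just the decimal representation: state (1, 2, 3, 4) gets number 1234.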
And now the next step is to solve these equations, both the transient equations and the equilibrium equations, numerically, because this is what I propose — so let us do that. First, however, let us look at the properties of the transition matrix. The matrices are huge but sparse. If all events are Poisson processes, the matrix size N is given by the product of the ranges of the different state variables. The density, however, is low, because the number of entries per row in the transition matrix is equal to the number of events; let the number of events be e. Consequently, the density is e divided by N. If the events are renewal processes, you also have to multiply by the products of the ranges of the supplementary variables, which I call y_i, and then you get a matrix size given by this huge product. In each row we then actually have up to 2^e entries — it is 2^e because we have to look at all combinations of events that can happen simultaneously. This is an approximation, but it is a good one. So we have huge matrices. The matrices are also banded: in fact, if x1 changes at most by 1, the state number changes by at most u1, which is the product of the ranges of the state variables except for x1, that is, N divided by (n1 + 1), and that represents the bandwidth. Now, here I have an example. I have four state variables, and the ranges of the state variables are from 0 to 9, and if you form all the combinations that gives 10 to the fourth, or 10,000, states. However, the transition matrix has 10,000 squared entries, which is 100 million. And if you assume five events, as is true for the tandem queue, you have only five times 10,000, that is, 50,000, non-zero entries. The same is true, to a lesser extent, for events generated by renewal processes. This suggests that we store only the non-zero entries of the transition matrix, and I apply this idea now in order to find transient solutions of the discrete-time Markov chain.
Thus we use three arrays: one for the row, one for the column, and one for the probabilities. We want to apply this method to calculate pi_j(t + 1) according to this formula — this is the formula for the transient probabilities of the discrete-time Markov chain. We assume that the total number of non-zero entries — call it N_top — is, in most cases, N times the number of events. So this is the number of non-zero entries, and we want to do the calculation for t going from 1 to t_top, and here you have the program to do that: for t from 1 to t_top, for n from 1 to N_top, we go through all the non-zero entries. This statement here really represents this formula: pi_new(j) is pi_j(t + 1), pi(i) is pi_i(t), and this is the probability p_ij. Here, i is what you find in the row array at position n, j is what you find in the column array at position n, and the probability is the corresponding entry; you just do that for all n. The remaining statements should be clear, so I don't want to go into them. But the important thing here is that the complexity is proportional to N — this is the important thing. Next we can look at transient solutions in continuous time. In this case you could solve the differential equations, but we suggest another method, called randomization, or uniformization; it is pretty standard now. In this method you create a matrix P, which is Q divided by f, plus the identity matrix, where f is chosen such that P is a stochastic matrix. Then we can look at the Markov chain described by P, and after n steps the probability of being in state j is given by this formula here. But there is now a difference: the steps are not steps of one; they are steps of exponential random variables with rate f. Consequently, pi_j(t), the probability of being in state j at time t in the continuous-time Markov chain, is given by this formula, which has pi_j^(n) multiplied by the Poisson distribution.
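Randomization/uniformization as just described fits in a few lines: form P = I + Q/f, then mix the powers p0 P^n with Poisson weights. A sketch with dense lists for clarity (the truncation rule — stop once the remaining Poisson mass falls below eps — and the function name are my own choices):

```python
import math

def transient_ctmc(Q, p0, t, eps=1e-12):
    """Transient distribution of a CTMC with generator Q (list of lists)
    by uniformization: P = I + Q/f with f = max |q_ii|, and
    p(t) = sum_n Poisson(f*t; n) * p0 P^n.  Note there are no
    subtractions in the accumulation, so rounding errors stay small."""
    n = len(Q)
    f = max(-Q[i][i] for i in range(n)) or 1.0
    P = [[Q[i][j] / f + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    v = p0[:]                        # holds p0 P^k, updated in place
    weight = math.exp(-f * t)        # Poisson(f*t; 0)
    result = [weight * x for x in v]
    k, tail = 0, 1.0 - weight        # Poisson mass not yet accounted for
    while tail > eps:
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
        k += 1
        weight *= f * t / k
        result = [r + weight * x for r, x in zip(result, v)]
        tail -= weight
    return result
```

For large t the result approaches the equilibrium distribution, which gives a quick sanity check.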
And as I said, this is randomization, or uniformization. This method can also be derived by a small modification of the Taylor series — so here we have a connection to the mathematics. Now, Taylor expansions of matrix exponentials tend to be numerically unstable, and it was argued at first that this method is potentially unstable. But this is not true, and the reason is that we have no subtractions; if there are no subtractions, the rounding errors increase very slowly. So randomization, or uniformization, as this method is called, works for millions of states, and it works for very large problems. And, what is even better, it performs better than standard algorithms for solving the differential equations. So far transient solutions; now we come to steady-state solutions. We discuss only discrete-time Markov chains. First, we have here the equations for the steady state of a discrete-time Markov chain. This is a system of linear equations, and we can use Gaussian elimination. Now, the mathematicians say Gaussian elimination is unstable if you have something like 10,000 or 100,000 states, but this is not necessarily true, because we can reformulate the system such that subtractions are avoided, and when we do that, the resulting algorithm has a probabilistic interpretation. If you look at linear algebra books, then when you solve a system of linear equations you start with the first equation and solve it for x1. We don't do this: we start with the last equation and solve it for pi_N. Why do we do that? Well, the mathematicians assure us that rounding errors are minimized if we eliminate in this order. So, using this mathematical recipe, we eliminate pi_N first, and then you get the following result for the Gaussian elimination. Here you have the system of linear equations; we set j equal to N, so that the left side becomes pi_N, and we solve for pi_N — that is the solution of the last equation for pi_N.
Then we do the substitutions: we first split the sum into the sum from i = 1 to N − 1, plus the last term, which alone includes pi_N. We plug in the value of pi_N here, we simplify, and what we have is a new system of N − 1 linear equations, in which the coefficients are now given by p_ij + p_iN p_Nj / (1 − p_NN). Now we can continue. We call these new coefficients p_ij^(N−1), that is, p_ij^(N−1) = p_ij + p_iN p_Nj / (1 − p_NN). This gives us a system of equations which we can solve for pi_(N−1); then we get a new system of equations which is solved for pi_(N−2), and so forth. In general, after eliminating the states from capital N down to small n, the coefficients are given by the following formula: p_ij^(n−1) = p_ij^(n) + p_in^(n) p_nj^(n) / (1 − p_nn^(n)). In order to obtain this, we also need pi_n, which is given by this equation; it expresses pi_n in terms of the pi_i with i less than n, and this can be used in the back-substitution phase for calculating the pi_n. Now, I told you there is a probabilistic interpretation of the whole thing, and to discuss it we first need to introduce embedding. What is embedding? Suppose you have a sequence x1, x2, x3, and so forth, which could be a Markov chain, and in addition we have a set C. We now create a subsequence of this sequence: an element x_t of the original sequence is included in the subsequence only if it is an element of C. For instance, if the set C is {1, 2, 3} and the x_t are 1, 3, 5, 2, 4, 2, then what happens? Well, 1 is in C, so we keep it; 3 is in C, so we keep it; 5 is not in C, so we omit it; 2 is in C, we keep it; 4 is not in C, we omit it; and 2 is in C. Now, we can actually embed embedded Markov chains again.
So we take this new sequence — call it x^C — and we can embed it into the set D, where D is, say, {1, 2}, and so we get 1, 2, 2. Now, if you look at this, you will see that instead of embedding D into the sequence x^C, you could have embedded D into the original sequence, and you would get the same result. Thus, if you have repeated embedding, you could as well do the embedding in one step, or, vice versa, in order to do an embedding that removes many different states, you can break it down into one-step embeddings. So embedding can be done either in several small steps or, equivalently, in one large step, and therefore we can reduce the state space by one state at a time until only one state is left — and this is what I call state reduction. Now, if you have a Markov chain, and if you do embedding with a set C, then the corresponding embedded chain is also a Markov chain. The problem is to find the transition probabilities p_ij^C, and what is the solution? The solution is to add the probabilities of all the paths that start in i, end in j, and avoid any state of C in between. Now, what happens is that Gaussian elimination is really embedding. We start with the original state space 1, 2, up to N, and we embed it into the set C which goes from 1 up to N − 1, and in order to find p_ij^C we add the probabilities of all the possible paths from i to j that avoid any state less than N in between. We can go from i to j directly — that's p_ij; we can go from i to N and from N to j — that's p_iN p_Nj; we can go from i to N, stay in N once, and go to j — the probability of this path is p_iN p_NN p_Nj; and so on. For the path that stays in N twice you find the probability p_iN p_NN squared p_Nj.
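The path sum over repeated visits to the removed state is a geometric series, which gives the standard one-step censoring formula p_ij^C = p_ij + p_iN p_Nj / (1 − p_NN). A sketch for removing the last state of a stochastic matrix (plain lists, names my own); a standard property of censoring — the censored chain's stationary distribution is the normalized restriction of the original one — gives a check in the test:

```python
def censor_last(P):
    """One-step censoring (embedding): remove the last state of a
    stochastic matrix P, summing the probabilities of all paths that
    detour through it:  p^c_ij = p_ij + p_iN * p_Nj / (1 - p_NN)."""
    n = len(P) - 1                    # index of the state being removed
    d = 1.0 - P[n][n]                 # escape probability from state n
    return [[P[i][j] + P[i][n] * P[n][j] / d for j in range(n)]
            for i in range(n)]
```

Applied repeatedly, this is exactly the state reduction the talk describes.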
Now we add all these probabilities together, and we get the transition probabilities for the embedding set {1, ..., N − 1}: p_ij plus p_iN times the sum over k of p_NN^k — you stay k times in state N — times p_Nj. As you see, this is a geometric series, and its sum is 1 / (1 − p_NN). Thus what we get is p_ij + p_iN p_Nj / (1 − p_NN), and this is nothing else but p_ij^(N−1), and the rest follows; the same now holds for all the p_ij^(n). Stated differently, you reduce the state space by one state in each step, and for the general step you get p_ij^(n−1) = p_ij^(n) + p_in^(n) p_nj^(n) / (1 − p_nn^(n)), and this is the transition matrix for the Markov chain embedded in the set {1, ..., n − 1}. Now, here is a subtraction, and I said we should avoid subtractions. This can be done, because all the p_nj^(n) here are transition probabilities of a Markov chain, that is, the sum across j is equal to 1; or, looking at state n, 1 − p_nn^(n) is the sum from j = 1 to n − 1 of p_nj^(n). It follows that this formula can be written without using subtractions, and you get the GTH method, the method of Grassmann, Taksar and Heyman. This method, though, has a problem: it is very difficult — well, not impossible, but very difficult — to exploit sparsity. If the matrix is dense, the number of floating-point operations goes up with the third power of N. In the example I presented before, we had N = 10^4, that is, 10,000 states, and this means, according to the formula here, that we get two-thirds times 10^12 flops. If you do these many flops on a laptop like the one I have here, that means around 100,000 seconds of computer time — that is about one day on the laptop. So this is not a good thing, really. Now you may say, well, can we not exploit bandedness? What do we get then?
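The subtraction-free elimination just described — the GTH method — can be sketched directly. The key line replaces 1 − p_nn^(n) by the row sum over the remaining states, so no subtraction ever occurs (plain Python, dense lists; names are mine):

```python
def gth(P):
    """Stationary distribution of an irreducible stochastic matrix P by
    GTH state reduction.  Eliminates the last state first, with the
    diagonal complement 1 - p_kk replaced by a row sum (no subtractions),
    then recovers the pi_k by back substitution."""
    n = len(P)
    A = [row[:] for row in P]                  # work on a copy
    for k in range(n - 1, 0, -1):
        s = sum(A[k][j] for j in range(k))     # = 1 - A[k][k], subtraction-free
        for i in range(k):
            A[i][k] /= s                       # stored for back substitution
        for i in range(k):
            for j in range(k):
                A[i][j] += A[i][k] * A[k][j]   # censoring update
    x = [0.0] * n
    x[0] = 1.0
    for k in range(1, n):                      # back substitution
        x[k] = sum(x[i] * A[i][k] for i in range(k))
    total = sum(x)
    return [v / total for v in x]
```

Each outer iteration is exactly the one-step censoring of the talk; the back substitution uses the expression of pi_n in terms of the earlier pi_i.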
You can divide by the bandwidth, which appears in the denominator here, but you still get an N cubed in the numerator; these algorithms are always of order N cubed. In the example, you would reduce the time from one day to a hundred seconds or so, but you still don't like to wait that long. What is even worse: if you look at transient solutions, they increase with N rather than with N cubed. Thus transient solutions are, in a way, easier to obtain than steady-state solutions. This is kind of strange, but for large sparse matrices it is good: in fact, in order to find equilibrium solutions, you can just iterate the transient solutions until you have reached the equilibrium. You can increase the convergence a little bit with different tricks. You can also go back to classical methods such as Gauss–Seidel — I used that extensively. Now, in Gauss–Seidel the order of the states is important, and, as has been shown by Mitra and Tsoucas, if you order the states such that the states downstream of the flow are handled first, then the convergence is faster. Thus you look at the flow; again, you have to look at the meaning of the model, and by exploiting the properties of the model you can improve your methods. Now, state reduction is still great for some theoretical results, and I have here three theoretical results, though I will only briefly mention the third one. First, it allows you to deal with infinite state spaces; secondly, it allows you to deal with transition matrices that have repeating columns; and third, it allows some useful embedding techniques, which I don't have the time to discuss. First, infinite state spaces. Here there is the following result: if j is recurrent, then the state space can be cut in such a way that the p_ij^(n) — the only things you need for the calculation — do not change by more than epsilon, as long as the cut is made far enough away.
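Iterating the transient recursion until equilibrium, as just suggested, is a short loop. A sketch (it assumes the chain is aperiodic — for a periodic chain the iteration oscillates — and the stopping rule and names are my own):

```python
def equilibrium_by_iteration(P, tol=1e-12, max_iter=100000):
    """Equilibrium distribution of an aperiodic DTMC by repeating the
    transient recursion pi(t+1) = pi(t) P until it stops changing.
    Cost per sweep is proportional to the number of (non-zero) entries,
    not to N cubed."""
    n = len(P)
    pi = [1.0 / n] * n                 # any starting distribution works
    for _ in range(max_iter):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi
```

With sparse storage (the three-array scheme from earlier in the talk) each sweep touches only the non-zero entries, which is what makes this attractive for large models.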
And here is the proof. If j is recurrent, most paths from i to j — ending as soon as j is reached — have a finite length. So we now only look at those paths, say the paths of length less than u, and if you look only at those paths, then the number of states reachable between two visits of j is finite, and the states that cannot be reached on such a path can be ignored: we can cut them without changing p_ij^(n) by more than epsilon. In other words, you can always cut the state space to a finite size if the Markov chain is recurrent and the cut is made far enough away. The next thing: let us look at transition matrices with repeating columns. Here I have a very simple proof, but you find more complex proofs in the literature. If p_ij is equal to p_(i+k)(j+k) for almost all i, j — which is true for many queueing models — then we have p_ij^(n) = p_(i+k)(j+k)^(n+k), and the proof is simple: any path from i to j counted in p_ij^(n) can be matched with a path from i + k to j + k by just shifting the path upwards by k units, and when doing this you meet the same transition probabilities. Consequently, the result follows. Now, this essentially finishes my talk, and here I have some conclusions — going back to the beginning, so to say. To increase the flexibility, I propose the discrete event approach; to reduce computer time, you can try to reduce the number of state variables; and to make the mathematics easier, use numerical methods, possibly modified to deal with probabilities. Okay, that is all. Here we have a number of references which I found useful, and of course my own papers, which I recommend at least to you. Okay, any questions? Yiqiang, please. — Yes, I am here. — Yes, I am hearing you. — So, Winfried, it is a very nice, very interesting talk.
So, when we do numerical analysis and also Markov chains, what are your comments on a statistical approach for the future of queueing models? — You mean, what is my suggestion for the future of queueing theory, is that the question? — Nowadays, with the advances in telecommunications, smartphones and all these real-time apps, a lot of the time we do not really have models: we have input, we have something, and then we have output. But in order to do the analysis, or to design the system in an optimal way, sometimes, based on knowledge or other information, we may assume that here is the model; but even in that case, very often a lot of information is still incomplete — for example, parameters could be missing, or the information you would like to have upon arrival may be delayed or missing. So we need to deal with that. — I see, yes, okay. As in the talk yesterday, I think that is something that has to be done. In general, you see, in order to make queueing theory applicable, we should also look at real problems, at actual problems. This is not something I did here — well, I did it in the past; I worked for quite a long time on a practical problem in airline maintenance — but I haven't looked at this at the moment. But yes, this is very important: that you look at statistical methods, that you look at actual queueing problems, and so forth. What I look at here are easy methods: once you have formulated such a system, how can you solve it without going into a lot of mathematics? That would be the second part of such a problem, and the first part, which you alluded to — the statistics and so forth — would come before.
Thank you, Winfried. By the way, I have a few words related to today's talk. Most of you know Winfried; actually, when I was doing my PhD, Winfried was my PhD supervisor, and my career was greatly impacted by Winfried's supervision. The directions I have been pursuing were also greatly influenced by Winfried's work — for example, the state reduction method, the GTH method, randomization, and factorization, among other things. So I would like to use this opportunity to thank Winfried once again. Thank you, Winfried. — Oh, thank you for this nice comment. — Hello, hi. — Yes, hi. — Winfried, very nice, insightful talk, thank you very much. My question is about the accuracy of the simulation method, and, related to that: when you use simulation, how do you assess the accuracy, and do you have comments on those things? — Okay, yes. Well, first of all, since the error decreases with the square root of the number of experiments, simulation, in order to be very accurate, requires a lot of runs. But with the computer speeds we have now, this is no great problem. However, there are cases where simulation does poorly, and that is exactly when you have rare events. Simulating rare events is poor because the error is, in a way, an absolute error, and if you estimate small probabilities you really need very, very many experiments. Yes, this is a problem — a major problem — and there is a whole research area addressing exactly this point. — Okay, thank you. — Hello, Myron Hlynka here. You were talking about these huge state spaces. Sometimes I have the issue that I don't even know how to order the states — is that an issue that you came across? — That is an issue, yes, and if you have ideas how to do that, I would like to hear about them.
You see, there are two orderings that I tried. One is the lexicographic order, which I presented, and the alternative is to order the states as you create them: you start with one state, see which events bring you to different states, and you use these new states again to find further states, and so forth. I initially thought that this is a poor method because it takes long to generate all the states, but that is not true — you can find the states in linear time. — Thank you.
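The exploration ordering described in this answer — start from one state, apply the events, queue the newly reached states — is a breadth-first search, linear in the number of reachable states. A sketch; the representation of events as (condition, function) pairs is my own:

```python
from collections import deque

def generate_states(x0, events):
    """Enumerate the reachable state space by exploration: start from x0,
    apply every enabled event, and queue states seen for the first time.
    Returns a dict mapping each state to its discovery number (the
    alternative state ordering mentioned in the discussion)."""
    order = {x0: 0}
    todo = deque([x0])
    while todo:
        x = todo.popleft()
        for cond, f in events:
            if cond(x):
                y = f(x)
                if y not in order:
                    order[y] = len(order)
                    todo.append(y)
    return order
```

Unlike the lexicographic numbering, this also discards unreachable states for free.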
Monte Carlo simulation is one of the most successful techniques, not only in operations research and performance evaluation, but in science in general. One reason for this extraordinary success is its flexibility. In contrast, most queueing models are rather specialized. In this talk, we suggest methods to make queueing theory more flexible. In particular, we suggest an event-based approach, which provides great flexibility for the modeller. We also show how to convert such event- based models into Markov chains, which can then be solved by classical numerical methods. The suggested method is particularly suited for small models, where its execution times are much lower than Monte-Carlo simulation. For larger problems, the curse of dimensionality takes over, and the execution times based on classical numerical methods increase exponentially. This means that for complex models, simulation finds numerical solutions with less computer time than classical numerical methods.
10.5446/54065 (DOI)
Yeah, okay. Can you hear me okay? — Barbara, you should put it in full screen. — Yeah. No, I was worried about the audio. Can you hear me? — All right. Yeah, that's good. — Okay. Just before you start: Barbara, you are the chair for the next session. — Okay. Thank you. — All right. So I'm going to talk to you this morning about catastrophes in queueing systems with time-varying periodic rates. This is joint work with Sherif Ammar from Menoufia University in Egypt, and he is with us today. It's really good to be here in Banff, and I want to thank the Banff staff and also our organizers — so here's Javad, and there's Yiqiang. Before I give you an overview of the talk, I'd like to tell you a very short story. This is a screenshot of a website called the Nocturnists. It's a collection of podcasts by medical professionals — most of them are doctors, but the one I'm going to tell you about today is by an AOC, and in this context AOC means administrator on call. She's a nurse, and she was an intern in the capacity of administrator on call. During her second day, at 2:30 in the morning, she got a phone call from the hospital. They woke her up, and she said, what's the emergency? And they said: the refrigerator doesn't work. And the refrigerator is in the morgue. So she's thinking, okay, I'm going to have to deal with this crisis — what does that entail? What happens when the refrigerator in the morgue doesn't work? Are we concerned about the bodies beginning to smell? And they said, well, no. They start to leak fluid. So this is my example of a catastrophe. And in the before time,
deaths within a hospital system would exhibit a periodic trend, one that was not uniform over the course of the day or over the course of the week. So that's a story from the before time. So here's an outline. I'm going to define for you what I call the key equations, which are just the generating function equations in the case where we have periodic processes: first for a general quasi-birth-death process, and then for a quasi-birth-death process where we also have catastrophes. A quasi-birth-death process is one where level transitions are possible to adjacent levels or within the existing level, and that's where it gets the birth-death name: you can have a single birth or a single death, so you can go up a level or down a level. But we also allow catastrophes, where we go to an unnumbered level in which repairs occur. Also in the background, I'm going to talk about the exponential generating function for the Bernoulli numbers. You can see that this has nothing to do with queues, but the method that we use to analyze it, at least the one that I'm going to go through, is the method that we're going to use to analyze the asymptotic periodic behavior of these queueing systems. Then next I'm going to show you, not for quasi-birth-death processes but just for a simple scalar birth-death process, the single server queue, both without catastrophes and with catastrophes. And then we're going to look at an example of a QBD where the phase processes are Erlang arrivals, without catastrophes and then with catastrophes. In each of these cases, we're going to be getting the exact solution for the level distribution, for the asymptotic periodic distribution. And by that, I mean that for these ergodic queueing systems where transition rates are periodic, there will be
an analog of the steady state distribution when rates are constant, one that is the same for each time within the period. So that's what we're looking for, and we can get those exact distributions for these particular examples. So: we study the asymptotic periodic distribution of queues with time-varying periodic rates and catastrophes that occur randomly according to an exponential distribution. When a disaster occurs, the system resets, all customers are lost, and an exponentially distributed time will go by while a repair is made. We derive the generating function equation for a QBD without catastrophes, then with catastrophes, and then we look at how these might be analyzed by examining the example of an exponential generating function, the exponential generating function for the Bernoulli numbers. So, QBDs: we've got a two-dimensional state space, X is the level of the process, and J is the phase of the process. In this talk, we're just going to be looking at QBDs, that is, birth-death processes, except that catastrophes are also allowed. When a catastrophe occurs, we go to an unnumbered state, which is the repair state, and the processes are going to have periodic transition rates with period one. So here's the infinitesimal generator for a QBD. The B's and A's here are block matrices; they're functions of time within the period. And below we have a system of differential equations. These guys are vectors with one component for each phase that the process can occupy. So here's the key equation. This is simply the result of multiplying this equation by z to the n and summing them all up: we get a differential equation for the generating function. We use the technique from the first week of differential equations, the method of integrating factors, and we can solve this up to an integral equation. And then in the case where it's periodic, we assume that P(z,t) is equal to P(z,t minus 1). And so we get this factor here.
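As a quick aside, the integrating-factor step is easy to see in the scalar case: for p'(t) = a(t) p(t), the solution is p(t) = p(0) exp of the integral of a from 0 to t. In the matrix case on the slide this does not reduce to a single matrix exponential unless the matrices commute, which is why the talk stops at an integral equation. A minimal numerical check of the scalar case — the rate function a(t) here is made up for illustration:

```python
import math

def ode_rk4(a, y0, t_end, steps):
    """Integrate y'(t) = a(t) * y(t) with classical RK4."""
    h = t_end / steps
    t, y = 0.0, y0
    for _ in range(steps):
        k1 = a(t) * y
        k2 = a(t + 0.5 * h) * (y + 0.5 * h * k1)
        k3 = a(t + 0.5 * h) * (y + 0.5 * h * k2)
        k4 = a(t + h) * (y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# an illustrative period-1 rate function
a = lambda t: 0.5 + math.sin(2 * math.pi * t)
# integrating-factor solution at t = 1:
# integral of a over [0, 1] is 0.5 + (1 - cos(2 pi)) / (2 pi) = 0.5
exact = math.exp(0.5 + (1 - math.cos(2 * math.pi)) / (2 * math.pi))
```

The RK4 solution and the closed-form integrating-factor solution agree to many digits.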
And it's the zeros of the determinant of this matrix that govern the asymptotic behavior of the system. So we can figure out the zeros of that; those will be the poles of the generating function. If we know the poles of the generating function, we can get an asymptotic formula for the level distribution, and in the examples that we're looking at today, we can actually get an exact formula. So moving from the case without catastrophes to the case with catastrophes, the stuff highlighted in green is all that's different. Eta is my repair rate, and that first row represents my state when the system is broken down: a catastrophe has occurred, and we're repairing it at rate eta. Gamma is the rate at which the catastrophes occur, and it's the same for any level within the process. Apart from this column, this is just a QBD. So that's the basic structure there. The key equation for the generating function is very similar, and I've highlighted in green what's different. These guys are also different: they're multiplied by this exponential, which has the rate at which the catastrophes occur in the exponent. The system of differential equations is basically the same, except we've got this unnumbered repair state and the possibility of having a catastrophe occur. So now we're going to step back, and we're going to look at the exponential generating function for the Bernoulli numbers. This is just like an ordinary generating function, except that we're dividing by n factorial. So if we've got a sequence a_n, the exponential generating function is just the sum from n equals 0 to infinity of a_n times z to the n over n factorial. So this EGF, or exponential generating function, for the Bernoulli numbers is this. It's going to have poles wherever e to the z is equal to 1, and of course that occurs at 2 pi i k for any integer k.
And so we can get a formula for the Bernoulli numbers from this generating function by reading off the coefficients on z to the n over n factorial, and we can get those coefficients by using these poles. So this thing I didn't bother to put in a box is a theorem that we're going to be using for analyzing our queueing systems. It's a simplified version of a result from Flajolet and Sedgewick: I'm just looking at generating functions where there are only poles of multiplicity 1. In that case, the coefficient on z to the n is just given by this formula, plus an error of this magnitude, where capital R is the radius of the contour in the complex plane if we use the Cauchy integral formula to get the coefficients of the generating function. So we're going to use that approach to analyze the Bernoulli numbers. Basically what we do is we take R and we encircle some of the poles, and we just let R increase until we've got the entire complex plane. And because the Cauchy kernel is z to the minus n minus 1, that's going to decrease as an inverse power of capital R, and so that will converge. So usually I would have a picture of Bernoulli here. This fellow wrote the piece of MATLAB code that I've adapted to help with the visualization of the complex numbers. So this is a graph of the generating function for the Bernoulli numbers. If I express z as r times e to the i theta, the colors here represent theta, different values of the angle, and the kind of quilting represents the modulus. The black circle is the unit circle, which is irrelevant for the Bernoulli generating function, but it matters for the probability generating functions. So what we're seeing here, if I can get this to pause — there we go — is that where all the colors come together, that's a pole.
And so I'm pushing my contour further and further out so it incorporates more of these poles, and eventually I get the entire plane, and what that corresponds to is an infinite sum I'll show you in a minute. So here's the Bernoulli generating function again, the EGF. This notation means: give me the coefficient on z to the n. Since this is an exponential generating function, the coefficient on z to the n will be the Bernoulli number divided by n factorial. Remember that near one of these roots, z over e to the z minus 1 is asymptotic to this expression. So if I divide that through by chi k, numerator and denominator, I'm going to have 1 over z over chi k minus 1, and you'll recognize that as just being geometric. And so the coefficient on z to the n of the term corresponding to 1 over 1 minus z over chi k is just this thing. And if we sum over all of those residues, we get that formula. This is just rewriting it, noting that we're excluding zero from our sum and factoring out the common term of 2 pi i to the minus n. This picture would correspond to a sum going up to 2, because I've got the residues from minus 2 to plus 2 showing there. And so this gives me a formula for B_n over n factorial — it's just this guy when n is even — and so I can solve for the nth Bernoulli number. And if I truncate my infinite sum, the error is just going to behave like a constant times a zeta function tail. So that's my basic approach. So why am I talking about the Bernoulli numbers in the midst of a queueing talk? Well, let's now look at the single server queue. So here's my key equation there. And this term comes from the periodicity.
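As an aside before continuing with the queue: the truncated residue sum just described — B_n over n factorial equals minus the sum over k not equal to 0 of (2 pi i k) to the minus n, for even n at least 2 — is easy to check numerically. A minimal sketch (not the speaker's MATLAB code):

```python
import math

def bernoulli_via_poles(n, terms):
    """Approximate the Bernoulli number B_n (even n >= 2) from the poles
    chi_k = 2*pi*i*k of z/(e^z - 1), using B_n/n! = -sum_{k != 0} chi_k^(-n).
    Truncating at |k| = terms leaves an error on the order of a zeta tail."""
    total = 0.0
    for k in range(1, terms + 1):
        chi = 2j * math.pi * k
        # the poles at +k and -k are complex conjugates, so their
        # contributions sum to twice the real part
        total += 2 * (chi ** (-n)).real
    return -math.factorial(n) * total
```

With a few hundred terms this reproduces B_4 = -1/30 to many digits; B_2 = 1/6 converges more slowly because the zeta tail for n = 2 decays only like 1/terms.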
Remember, we're assuming that P(z,t) is equal to P(z,t minus 1). And so when we solve our equation for P(z,t), we get this factor, and we treat it just like the denominator of the Bernoulli number exponential generating function: we're going to have poles where that's equal to 0. So that happens when that exponent is equal to 2 pi i k, and those solutions are given here. And you'll notice I don't have the minus-one root here. That's because we're working with a probability generating function. This is quadratic — if I multiply through by z, this will be z squared over here — so it's going to have two solutions, and I don't want the negative solution, because that's inside the unit circle. If those were not removable singularities, then that would mean it's not an ergodic stochastic process, because the generating function wouldn't converge inside the unit circle. So all those guys don't count, basically. So here's my picture of the generating function for the single server queue with these particular periodic rates for lambda and mu. We start inside the unit circle; the unit circle is black, and you'll see it as we zoom out. All these pretty things are — I'm sorry — poles of the generating function. What I'm graphing here is the denominator; if I had the numerator as well, then the zeros in the numerator would cancel these out. So we're zooming out here, and on the right we've got an approximation: the red is the ordinary differential equation solution, and the blue is our asymptotic solution. So let me see if I can stop this. This is the approximation when I sum from minus one to one — these are the singularities corresponding to k equals minus one, zero, and one — but it's not very good. So let's go out a little bit further. And so those are in sync.
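The red curves in these comparisons come from integrating the Kolmogorov forward equations of the queue directly, and that side of the comparison is easy to reproduce. The sketch below truncates a single-server queue with period-1 rates at a finite number of levels and integrates the level probabilities with RK4 until they settle into the asymptotic periodic distribution. The rates lambda(t) = 1 + 0.5 sin(2 pi t) and mu(t) = 2 + 0.5 cos(2 pi t) are made-up illustrations, not the rates used in the talk:

```python
import math

def periodic_mm1(periods, levels=30, steps_per_period=250):
    """Integrate the truncated Kolmogorov forward equations of an
    M_t/M_t/1 queue with period-1 rates; return the level probabilities
    at the end of the last period."""
    h = 1.0 / steps_per_period

    def deriv(p, t):
        lam = 1.0 + 0.5 * math.sin(2 * math.pi * t)
        mu = 2.0 + 0.5 * math.cos(2 * math.pi * t)
        d = []
        for n in range(levels):
            out = (lam if n < levels - 1 else 0.0) + (mu if n > 0 else 0.0)
            val = -out * p[n]
            if n > 0:
                val += lam * p[n - 1]   # birth from level n-1
            if n < levels - 1:
                val += mu * p[n + 1]    # death from level n+1
            d.append(val)
        return d

    p, t = [1.0] + [0.0] * (levels - 1), 0.0   # start in the empty state
    for _ in range(periods * steps_per_period):
        k1 = deriv(p, t)
        k2 = deriv([p[i] + 0.5 * h * k1[i] for i in range(levels)], t + 0.5 * h)
        k3 = deriv([p[i] + 0.5 * h * k2[i] for i in range(levels)], t + 0.5 * h)
        k4 = deriv([p[i] + h * k3[i] for i in range(levels)], t + h)
        p = [p[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
             for i in range(levels)]
        t += h
    return p
```

Comparing the snapshots after 50 and after 51 periods shows the solution has become essentially periodic, which is the "asymptotic periodic distribution" the talk solves for exactly.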
And test my coordination here. So we really don't have to go out very far before the approximation is excellent, and it would be exact if we were to take the infinite sum of all those terms. And here's what the exact solution looks like. To get the probability of being in level l, I just take this general formula, multiply it by the root to the minus l power, and sum over all the different roots k. But I can truncate it at a fairly small number of terms and get an excellent approximation. Okay, so now we're shifting to catastrophes. This is the key equation for catastrophes. It's barely different from the one for the M/M/1 queue: I've got this term, and in here and in here I've got a gamma factor, and then there's this repair state. But basically it's the same. And so my roots are these guys. There should be a minus one here, but again, that's inside the unit circle, so I ignore it — it's removable, like before. Same deal: the blue is the approximation when I just take a single term, and as I pan further out, I'm going to be taking more and more terms, and you'll see the singularities appear in a column here on the right. And down here at the bottom, these are the arrival rate, the departure rate, the catastrophe rate, and the repair rate. Okay, so now we're going to look at the QBD, not just the scalar case. So here's the key equation specialized to Erlang arrivals. And I haven't told you what these matrices H and H inverse are. They don't depend on time — they do depend on z, but they don't depend on time. That matrix that I just showed you is H, and in this diagram, H inverse; the component that's m off of the diagonal is given by this formula here. And so you can see where we're going to get our poles from. And so that yields this formula. Notice the columns of this matrix: this column times chi k to the one over capital K will give me the next column,
and this one times chi k to the one over capital K will give me the next column. So I can write this formula very compactly like this. And I should use a different letter than q — let me just call that g, because it's not the repair rate, it's just some function. So you get this very simple formula for the probability of being in level l and phase j at time t within the period. And here are some examples. This is level one, phase zero, when I've got six possible phases for my Erlang distribution. The blue is a single term of that asymptotic approximation and the red is the ordinary differential equation result. And you really can't tell the difference when I've got three terms, k equals minus one, zero, and one. This is level one, phase five — again, just a few terms. If I take my rates and multiply them by 10, it takes a little bit longer to converge, but it still does. And here's phase five; phase five takes a little longer to converge than phase one, but there you go. So in the case where we've got the QBD with phase distributions and catastrophes, we just make these couple of adjustments — this is adjusted with the gamma in the exponent — and here's the asymptotic approximation with just a single term; we would just sum over all of the poles chi m. So I'm going to show you some references and then give you a summary statement. These are about catastrophes. This is the theorem in the more general case, for meromorphic generating functions, for approximating the coefficient on z to the n, not just the specific version I showed. Here are some papers related to finding these solutions when you're not dealing with catastrophes. This is a nice paper by Tkach about Erlang arrivals in the case of constant rates. And then there's Wilf's book on generating functions. So just to remind you what the talk's about: we're using the singularities of the generating function to figure out what's going on.
Here's a photo from when CanQueue was in Banff back in 2011. And the summary is: we can find exact solutions for the asymptotic periodic distribution of some queueing systems using singularity analysis. So thank you for your attention. Thank you, Barbara. Very good.
We study the asymptotic periodic distribution of queues with time-varying periodic transition rates and catastrophes that occur randomly according to an exponential distribution with time-varying periodic rate. When a disaster occurs, the system resets, all customers are lost and an exponentially distributed period of time elapses before the repair is complete. Service is governed by a phase distribution. The asymptotic periodic distribution of the queue process is analogous to the steady state distribution for a system with constant transition rates.
10.5446/54067 (DOI)
I'm sorry, I have to interrupt. The last speaker of each session will be the chair of the following session. So if I am the last speaker of the first session, then I would be the chair of the next session. For example, in our case, Barbara would be the chair of the 11:30 session. This way we don't have to assign any new chairs — the last speaker in a session chairs the next one. Thank you very much. Okay, sorry, you go ahead. Hello, everyone, can you hear me? Yes? Okay, now I'm going to share my screen. Can you see my screen now? Yes. Okay, thank you. Hello, everyone. My name is Haran. I'm currently a PhD candidate at the University of Waterloo, and today I'm going to give a talk about double-sided queues with marked Markovian arrival processes and abandonment. This is joint work with my supervisor, Dr. Qi-Ming He, and this work has recently been published in Stochastic Models. Here is the agenda of my presentation. Since the methodology we use to solve this queueing model is the multi-layer MMFF process, I'm going to briefly discuss this methodology before we discuss our queueing model. So let's start. First, let me introduce the background of this problem. A double-sided queue is a queueing model in which each side demands service from the other side. This model was first proposed by Kendall in 1951 as a passenger-taxi service system, where passengers come to a taxi station to take taxis. Let me use this diagram to illustrate the idea. In this diagram, we use circles to represent the passengers and squares to represent the taxis, and different colors mean different types. As you can see, we have three different scenarios: an empty system, a passenger queue, and a taxi queue. When there are no passengers and no taxis, it is an empty system. Now, from this empty system, if passengers arrive first, we will have a passenger queue.
When there is a passenger queue, once a taxi arrives, we will have a matched pair leaving the system immediately. Otherwise, those passengers can leave the system without matching after a period of time. Once all the passengers get matched or abandon the queue without matching, the system comes back to the empty state. Similarly, if the taxis arrive first, we will have a taxi queue, and we can have a matched pair leaving the system once a passenger arrives. And from this graph, you can see that those two queues, the passenger queue and the taxi queue, can never coexist in the system. This is a very important property of this system. There is a lot of interesting literature about this double-sided queueing system. In the 1960s, Dobbie and Giveen studied the queue length of a double-sided queueing system, but they didn't consider abandonment and they assumed Poisson arrival processes. Then in 2002, Conolly and coauthors studied a double-sided queueing system with abandonment. Such double-sided queueing systems with abandonment have been applied in many other stochastic systems, like perishable inventory systems, financial markets, and organ transplantation. More recently, Adan and Weiss studied a multi-type customer double-sided queueing system, and they considered more general matching rules, such as whether or not two sides can be matched depending on their types. In terms of methodology, the matrix-analytic method has also been applied to double-sided queueing systems. For example, in 2000, Takahashi studied a double-sided queueing system with a finite waiting space, assuming the arrival processes follow Markovian arrival processes, but they didn't consider abandonment. Then, in this year, Nero studied a similar system, but they considered exponential impatience times. Our model is similar to their model, but we consider marked Markovian arrival processes.
So we have multiple types of customers, and we assume discrete impatience times, which are more general than exponential impatience times. In a little while I will give a formal definition of our model, but before that, let's first take a look at the methodology, the multi-layer MMFF process. The MMFF process is a very useful tool, and it has been applied to many real-life systems like dam control and risk models, and of course in queueing models. In dam control, the water level can be modeled as an MMFF process, and in risk models, the surplus of an insurance company can be modeled as an MMFF process. In queueing models, normally we will track the age or the workload of the system; in our work, we will define an age process. So what is an MMFF process? Here is the basic definition. An MMFF process is a two-dimensional stochastic process. As you can see from the diagram, one component is the fluid level, and the fluid level can go up and down. The change of the fluid level is driven by the underlying Markov chain, and the states of this underlying Markov chain can be partitioned into three subsets: S plus, S minus, and S zero. When the underlying state is in S plus, the fluid level can go up. When the underlying state is in S minus, the fluid level can go down. And of course, when the underlying state is in S zero, the fluid level remains the same for a period of time. Now, a multi-layer MMFF process is a fluid process in which the change rate of the fluid level is modulated by layer-dependent continuous-time Markov chains. So, for example, in this sample path, we assume there are three different layers, and each layer has its own underlying Markov chain and a corresponding change rate. Then what we want to do is put them together, so we need to define the transitions on the borders. For example, here we have L1 equals 0 and L2 equals 3.
So if the fluid level equals one of the borders, we want to know how long the fluid level will stay on the border and how it will leave the border — whether it is going to leave by increasing or by decreasing. The other transition probabilities we want to define are for when the fluid level approaches a border, either from below or from above: we want to know the probability of the fluid level crossing, reflecting at, or entering the border. So in total, we have six cases — up crossing, down crossing, up reflecting, down reflecting, up entering, down entering — and those six cases are expressed by those six matrices. So now we have a well-defined multi-layer MMFF process, and what we want to find is the joint stationary density function. Here is the definition. Since the fluid level can stay on the borders, we have a probability mass function where the fluid level equals those borders, and for all other fluid levels, we have a distribution function and a density function. Here we put the mass function and the density function into vector form, the vectors p and pi. And here is the result: under certain conditions, the final result can be expressed in matrix-exponential form, and the main part is the part highlighted in red. The remaining pieces — C plus, C minus, and gamma — are used to determine the underlying states and their change rates. But if the change rate of the MMFF process can only be positive one or negative one, then C plus and C minus become identity matrices and the gamma can be removed. Therefore, all we actually need is the red part, and that's exactly what we need for our queueing model, because in our queueing model the change rate can only be positive one or negative one. So later you will see that our final result is very similar to those red parts. So now we can go to our queueing model. Here is the formal definition of the queueing model.
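A quick aside to make the fluid-flow machinery concrete: in the simplest possible instance — a single layer, two underlying states, drift +1 in the up state and -1 in the down state, reflected at zero — the stationary distribution can be worked out by hand from the standard fluid balance equations. With up-to-down rate beta and down-to-up rate alpha (alpha < beta for stability), that calculation gives an exponential tail plus a probability mass at zero of size (beta - alpha)/(alpha + beta). The sketch below checks the boundary mass by simulation; it is an illustration only, not the model from the talk:

```python
import random

def empty_fraction(alpha, beta, horizon, seed=1):
    """Event-driven simulation of a two-state fluid level: drift +1 in the
    up state (left at rate beta), drift -1 in the down state (left at rate
    alpha), reflected at zero.  Returns the fraction of time at level 0."""
    rng = random.Random(seed)
    t = level = empty_time = 0.0
    up = False                      # start in the down state at level 0
    while t < horizon:
        sojourn = rng.expovariate(beta if up else alpha)
        sojourn = min(sojourn, horizon - t)   # clip the final sojourn
        if up:
            level += sojourn
        elif sojourn >= level:
            empty_time += sojourn - level     # time stuck at zero
            level = 0.0
        else:
            level -= sojourn
        t += sojourn
        up = not up
    return empty_time / horizon
```

For alpha = 1 and beta = 2 the hand calculation predicts an empty fraction of (2 - 1)/(1 + 2) = 1/3, and a long simulation run lands close to that.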
First, we assume that passengers arrive at the queueing system according to a marked Markovian arrival process denoted by D0 to DK, so we assume there are capital K different types of passengers. We can find the arrival rate of each type of passenger, lambda k, and the total arrival rate, lambda. And of course, we also need to define the impatience time distribution for each type of passenger, like this. Then similarly, we define the arrival process of taxis. This is another marked Markovian arrival process, denoted by B0 to BH, which means we assume there are capital H different types of taxis. We can find the average arrival rate for each type of taxi and the total arrival rate, mu h and mu. And of course, we need to define the impatience time distribution for each type of taxi. Now, we want to ensure the stability of this queueing system, which can be expressed by these two formulas. The first factor is the proportion of type k passengers with infinite impatience time, and lambda k is the arrival rate of type k passengers, so their product is the arrival rate of type k passengers with infinite impatience time. Then we sum over k to get the total arrival rate of passengers with infinite impatience time. This rate should be less than the total arrival rate of taxis; otherwise, the queue length of passengers will go to infinity. Similarly, we want to ensure the total arrival rate of taxis with infinite impatience time is less than the total arrival rate of passengers; otherwise, the queue length of taxis will go to infinity. Next, we define the age processes. Since we consider two sides of customers, we can define an age process for each side. But as we mentioned, those two queues can never coexist in the system, so the two age processes can never both be positive at the same time. So we flip one of the age processes upside down — in our case, we flip the age of the taxis upside down.
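As an aside on the rates just mentioned: for a marked MAP with matrices D0, D1, ..., DK, the per-type rates come out mechanically as lambda_k = theta Dk 1, where theta is the stationary vector of the generator D0 + ... + DK and 1 is a column of ones. A small sketch with made-up 2-by-2 matrices (the matrices are illustrations, not from the paper):

```python
def mmap_type_rates(D_list):
    """Per-type arrival rates of a 2-state marked MAP.
    D_list = [D0, D1, ..., DK]: D0 holds transitions without arrivals,
    Dk holds transitions accompanied by a type-k arrival.
    Returns [lambda_1, ..., lambda_K], lambda_k = theta * Dk * 1, where
    theta is the stationary vector of D = D0 + ... + DK (2x2 case only)."""
    D = [[sum(Dm[i][j] for Dm in D_list) for j in range(2)] for i in range(2)]
    # stationary vector of a 2-state generator is proportional to (q21, q12)
    q12, q21 = D[0][1], D[1][0]
    theta = (q21 / (q12 + q21), q12 / (q12 + q21))
    return [sum(theta[i] * Dk[i][j] for i in range(2) for j in range(2))
            for Dk in D_list[1:]]

# illustrative matrices: two phases, two passenger types
D0 = [[-3.0, 1.0], [0.0, -2.0]]
D1 = [[1.0, 0.0], [0.5, 0.5]]
D2 = [[1.0, 0.0], [0.5, 0.5]]
```

Here D0 + D1 + D2 is a proper generator with stationary vector (0.5, 0.5), so both type rates work out to 1.0 per unit time.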
Then we define A(t) as the age of the system. Now that we have the age process, we need to know the underlying states, and the underlying state consists of three parts. First, we use S(t) to track the type of the passenger or taxi at the head of the queue. Since we are using Markovian arrival processes, we also need to know the underlying states of those two arrival processes: we use I_A(t) to track the underlying state of the passenger arrival process and I_B(t) to track the underlying state of the taxi arrival process. Now we put those parts together and get a continuous-time Markov process, which we define as the age process. But still, this age process is not an MMFF process, because we have jumps. So what we do is add fictitious periods to replace those jumps: the downward jumps we replace with linearly decreasing periods, and the upward jumps we replace with linearly increasing periods. Now we have a well-defined MMFF process, and this MMFF process is special because the change rate can only be one or negative one. So now we can obtain the joint stationary density function for this MMFF process, and then we do a censoring: we censor out those fictitious periods we just added. Then we come back to the age process, and we have the joint density function for the age process. Here is the result, now denoted f(x). The joint density function for the age process is very similar to what we got before for the MMFF process. Now we need to do a normalization, because of the censoring: we need to normalize those constant vectors u plus, u minus, and p. So now we have the joint density function for the age process, and from it we can obtain all the other queueing quantities we are looking for. For example, here is the matching rate. We define the matching rate as the number of pairs of matched passengers and taxis per unit time.
In this expression, the first summation is the probability that there is a taxi queue, and D minus D0 represents the arrivals of passengers, so it gives the arrival rate of passengers. Putting them together gives the matching rate when there is a taxi queue. Similarly, the second summation involves the probability that there is a passenger queue, so that part is the matching rate when there is a passenger queue. When we put them together, we get the total matching rate. Then, if we want to know the matching rate for a specific type of passenger and taxi, all we need to do is use the underlying states. Here we use this e_h, so the probability becomes the probability that there is a type h taxi at the head of the queue, and D_k gives the arrivals of type k passengers; putting them together gives the matching rate of type k passengers with type h taxis on that side. Similarly, with this e_k, the probability is that there is a type k passenger at the head of the queue, and B_h gives the arrivals of type h taxis. Adding all of them together, we get the total matching rate of type k passengers and type h taxis. And here is a summary of all the queueing quantities we can find in our paper. For example, we can find the probability of being served and the abandonment probability, the waiting time of served inputs, the waiting time of abandoning inputs, the mean waiting time, and the queue lengths. And of course, we can find all those queueing quantities for individual types as well. Next, let me give you a simple example. In this example, we assume there are three different types of passengers, their arrival process is a marked Markovian arrival process, and there are different impatience time distributions for the different types of passengers.
And then for the taxis, we assume they follow a Poisson process, but this Poisson process can be decomposed into two independent arrival processes, and together this is a special case of a marked Markovian arrival process. And of course, we assume the taxi types have different impatience time distributions. Then this is the result. For example, for the matching rates, this 0.638 means that in one unit of time, there are on average 0.638 matched pairs of type 1 passengers and type 1 taxis. And of course, we can find all the other queueing quantities for each type of passenger and taxi, like the probability of being served, the loss probability at the head of the queue, the loss probability before reaching the head of the queue, waiting times, and queue lengths. And we can also find the density functions of the waiting times of served passengers and taxis. As you can see from the graph, they are quite different, because they have different arrival rates and impatience time distributions.
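Numbers like the matching rate above can be sanity-checked against a stripped-down simulation: one type on each side, Poisson arrivals, and exponential (rather than discrete) patience. The sketch below is an illustration only, not the paper's method; its one exact property is the conservation law that every arrival is either matched, abandons, or is still in queue at the end:

```python
import random

def simulate_double_ended(lam, mu, theta, horizon, seed=7):
    """Gillespie simulation of a single-type double-ended queue.
    Passengers arrive at rate lam, taxis at rate mu; each waiting customer
    abandons at rate theta.  q > 0 means a passenger queue, q < 0 a taxi
    queue (they never coexist).  Returns arrival/match/abandonment counts
    and the final queue state."""
    rng = random.Random(seed)
    t, q = 0.0, 0
    arr_p = arr_t = matches = aband_p = aband_t = 0
    while True:
        rate = lam + mu + abs(q) * theta
        t += rng.expovariate(rate)
        if t > horizon:
            return arr_p, arr_t, matches, aband_p, aband_t, q
        u = rng.random() * rate
        if u < lam:                 # passenger arrival
            arr_p += 1
            if q < 0:
                matches += 1        # matched with a waiting taxi
            q += 1
        elif u < lam + mu:          # taxi arrival
            arr_t += 1
            if q > 0:
                matches += 1        # matched with a waiting passenger
            q -= 1
        elif q > 0:                 # a waiting passenger abandons
            aband_p += 1
            q -= 1
        else:                       # a waiting taxi abandons
            aband_t += 1
            q += 1
```

On each side, arrivals = matches + abandonments + customers left in queue holds exactly, and the realized matching rate stays below the smaller of the two arrival rates.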
Another queue system we are investigating now is the standard queuing model with multiple type of passengers. The arrival process follows the same marked mark of the arrival process and the service time follows this type distribution. And of course, the abandonment type, inpatient type are different and it's a general distribution. That's all my presentation. Thank you very much. Any question. Any question. I have a comment.
In this paper, we study a double-sided queueing model with marked Markovian arrival processes and finite discrete abandonment times. We apply the theory of multi-layer Markov modulated fluid flow (MMFF) processes to analyze the queueing model. First, we define three age processes for the queueing system and convert them into a multi-layer MMFF process. Then we analyze the multi-layer MMFF process to find queueing performance measures related to the age processes, matching rates/probabilities, waiting times, and queue lengths for both sides of the queueing system. We obtain a number of aggregate quantities as well as quantities for individual types of inputs, which can be useful for the analysis and design of, for example, passenger-taxi service systems and organ transplantation systems.
10.5446/54069 (DOI)
[The opening minutes of this talk are unintelligible in the transcript; fragments mention the speaker's background and working at a company in Berlin.] The company had a simple, but not easy to explain, business model: we provide visual content for companies at scale. What does that mean? We receive orders from our customers. Usually a customer can input the orders, or import these orders into our system. [unintelligible]
[Several minutes unintelligible in the transcript; fragments mention delivering the final assets via SFTP, Google Drive, or an API, and ordering through an online platform.]
[Several minutes unintelligible in the transcript; fragments mention the development process and versions: Pyramid 1.8 and 1.9, SQLAlchemy models, and a beta version 1.2.]
[Unintelligible.] As usual: GitHub, Travis, Elasticsearch with Logstash to centralize logging, Sentry in the front end to catch errors and manage the failures of the front end, New Relic for the back end, and Slack for notifications. Okay. So let's start with the infrastructure, to have an idea of what we did. First of all, we decided from the beginning to use AWS. We worked with three environments: a development environment, a stage environment, and a live environment. So we had to create this infrastructure for each environment, and we decided to put each in a different region. And we created a private network. That's the standard: having your private network, putting your servers inside, and having access only through a bastion and VPN. So we made it secure. We also put CloudFront in front, to make sure we have high performance delivering especially the front end and the images. So we use a lot of services from AWS. We also use the security groups and everything that can help us have the best security and, at the same time, the best performance. One of the great things that we did not use from the beginning, but discovered in the last month, was this tool for Kubernetes, kops.
I really recommend you check this tool, kops (Kubernetes Ops). It's a tool with which you can easily create Kubernetes clusters, with just a one-line command, on Google Cloud or Azure or AWS. You can just say: okay, I want to create a Kubernetes cluster with one master, in multiple availability zones, and the number of nodes you want to put in the cluster. In the end, the solution will be something like this: you have these different availability zones, and a VPC is created automatically, subnets, everything that needs to be done to create this infrastructure inside Amazon. This can also be used with the software-defined network solutions that Kubernetes supports. It makes it really easy to reproduce, to recreate your environment. You can also export the output: you can run the command to directly create the infrastructure, or you can export to a CloudFormation template (that's a tool from Amazon) and use this file afterwards to create a new environment. Here is just a simple overview of how Kubernetes works. We have Kubernetes as the base, and a lot of services to manage authentication, logging, applications, deploys, and workload. So you just say: okay, run this container, and Kubernetes will take care of running multiple instances. If an instance crashes, it starts it again. If one of the nodes crashes, the application will be started on another available node. So you don't have to care about these things. Autoscaling too. And on top of this, we have Deis. The important thing about Deis is that it really makes it easy to work with Kubernetes. We just have a command to say: okay, I have this repository with a Dockerfile; and, a small addition, we also have a Procfile to describe the processes that are going to run. And you say: okay, please deploy this, with `git push deis` and the name of the branch. If you want to deploy a different branch, you just change the command. So, in the end, in the background, this is what is going to happen.
It goes to the build phase: if the container was not built before, it builds the container, then starts the application and does the blue-green process to bring the application up, checking that the application is really running properly before switching between the releases. So the user will not notice the change at all; if the application is properly configured, you just change the release. And you can also roll back if you want. You have the history of who did the deploy, how many releases you did, and which version, and everything is controlled by Deis. We also have other tools to help us with automation. We use Ansible to manage security group rules, creating databases on RDS, creating database instances, and installing some tools inside the nodes: making sure we always have a homogeneous environment on our servers, for the users that can access via SSH keys, and things like that. About New Relic: this was, from the beginning, an important part of the process, putting the New Relic configuration in all the applications. So we know from the beginning, in each environment, what's going on; if we have errors, we can also trace them and see performance issues and things like that. And we have automation around these tools to send alerts to our Slack tech channel. So we can be notified by mail and by Slack about what's going on: for example, if we have a high error level in the applications, or if something went wrong on a server, like a performance issue, memory, disk, or something like that. Okay. Now let's talk about code. In this year of work we created a lot of code, and most importantly we created a shared code base, which is really something we want to open source, because I think it can be really useful for other people. We created a package that we call common; we will need to split it up in the future.
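The blue-green flow just described (build if needed, start the new release alongside the old one, health-check it, only then flip traffic, keep history for rollback) can be sketched in a few lines. This is not Deis's code; all names and callables here are invented for illustration.

```python
def deploy(releases, router, build, start, healthy):
    """Blue-green: only flip traffic once the new release passes checks."""
    image = build()                     # no-op if the container already exists
    new = start(image)                  # comes up alongside the live release
    if not healthy(new):
        # the old release keeps serving traffic; users never see the failure
        raise RuntimeError("new release failed health check")
    releases.append(new)                # history is what enables rollback
    router["live"] = new                # atomic switch: users see no gap
    return new

releases, router = ["v1"], {"live": "v1"}
deploy(releases, router,
       build=lambda: "img-v2",
       start=lambda img: "v2",
       healthy=lambda rel: True)
assert router["live"] == "v2" and releases == ["v1", "v2"]
```

Rolling back is then just pointing `router["live"]` at an earlier entry of `releases`, which is essentially what the release history in Deis gives you.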
This package manages all our code about events: the event classes and interfaces, and also the event workers, the workers that read messages from the queue. Logging too. We created from the ground up a workflow library to manage our internal workflow processes, like we have in Plone, but built specifically to work together with the SQLAlchemy models. And we define base models in common, and mixin classes to be used to compose different model types. We have this package, briefy.ws, where we use Pyramid to create REST APIs, making it easy to create the resources, validation, and error handling, plus some tools to work with Google Drive and S3. So, going into the details of briefy.common: the workflow engine, as I said; events; the base models, also with versioning and local roles (we created a really special way to define local roles and create these dynamic attributes in the models). We also have a cache tool, because we detected that the most intense work we had to do on every request was serializing the objects. That was the performance bottleneck. So, using dogpile.cache, we can automatically say: okay, every time you call this function to serialize, cache the result; and we got a huge performance improvement. And there is the worker I talked about, consuming messages from the SQS queue. In briefy.ws we have the base class to expose models as API endpoints. So it was really just: okay, this is my endpoint, this is my model class, and it's done. We don't have to write the collection POST, collection GET, PUT, DELETE. Everything is done automatically by introspecting the model, and we are able to filter and reuse this generic view to create the resource. One important thing: inside these REST methods, we fire object lifecycle events. So, like we have in Plone, when an object is created or when an object is changed, we fire an event that will be sent to the queue and can trigger actions, which I will show later.
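The serialization cache mentioned above is what dogpile.cache gives you with `region.cache_on_arguments()`. To keep the sketch self-contained, the decorator below is a minimal stdlib-only stand-in for that behaviour; the model function is invented for illustration.

```python
import functools

def cache_on_arguments(store):
    """Minimal stand-in for dogpile.cache's region.cache_on_arguments()."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            key = (fn.__name__,) + args
            if key not in store:
                store[key] = fn(*args)   # slow path: serialize only once
            return store[key]
        return wrapper
    return decorator

store, calls = {}, []

@cache_on_arguments(store)
def serialize_asset(asset_id):
    # pretend this walks a SQLAlchemy object graph -- the measured bottleneck
    calls.append(asset_id)
    return {"id": asset_id, "title": "Asset %d" % asset_id}

assert serialize_asset(42) == serialize_asset(42)
assert calls == [42]   # the expensive serialization ran only once
```

With dogpile.cache the `store` would be a configured region (memory, Redis, memcached), and invalidation on object change would be wired to the same lifecycle events the talk describes.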
And this includes the workflow. One thing that was really important for us was to be able to create the automatic validation from the model as well, introspecting the model. I don't know if you have used Pyramid, but you have to define the colander schema. So we used a library that does this work; we had to customize this library for our use, but it really helped us not to have to rewrite this information again. briefy.ws also has an important role in security, computing dynamic ACLs for our models. So we could define which users can read or write, and in which situations, based also on the workflow status and the filters I mentioned. Some productivity tools we used together with this: a library to check the style of our code, to ensure we really have a standard for code, with easy checking, helping us fix the failures; and we put this in our build process. Cookiecutter templates, to make it easy to create new projects, to create new microservices using these templates. And some Docker images that we created, customized for our needs. About the services, what do we have? The main services we created: Leica, the business-process service, the main operational process that does all the user-visible workflow. We have the choreographer, which does the handling of the events. All events from all microservices are dispatched to this one queue; the choreographer reads this queue all the time, checks the event, and decides: okay, when this event happens, I have to do something. I have to send an email, I have to post to Slack. We also created a JWT service to do authentication and create a JWT token. We created a library to store the information about our digital assets, about the images, so that at the end of the process we could import this information into our library and then be able to query it and deliver it to the customers afterwards.
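The idea of deriving request validation from the model, rather than writing a schema by hand, can be shown with a small stdlib-only sketch. The column description below is a made-up stand-in for what would be introspected from a SQLAlchemy model and turned into a colander schema by a helper library.

```python
# columns of a hypothetical Assignment model: name -> (type, required)
COLUMNS = {
    "title": (str, True),
    "number_of_photos": (int, True),
    "notes": (str, False),
}

def validate(payload, columns=COLUMNS):
    """Check a request body against the model's columns, collecting errors."""
    errors = {}
    for name, (typ, required) in columns.items():
        if name not in payload:
            if required:
                errors[name] = "required"
        elif not isinstance(payload[name], typ):
            errors[name] = "expected %s" % typ.__name__
    return errors

assert validate({"title": "Hotel shoot", "number_of_photos": 30}) == {}
assert validate({"number_of_photos": "thirty"}) == {
    "title": "required",
    "number_of_photos": "expected int",
}
```

The payoff described in the talk is exactly this: the generic REST view runs the derived validator before POST/PUT, so adding a column to the model updates the API validation for free.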
One of the important things we discovered during the work was how we can scale our operational process. It was really a great idea: validating and automating things in our workflow, checking image quality, checking size, number of images. Everything we define in our process should be automatically checked, helping the curating and avoiding manual work for these kinds of tasks. We also created some support services. For image handling we use an open-source image server; it's really a nice project to check out, because it's general purpose: it resizes images, does smart cropping, you can apply filters, and it can also use AI for automatic tagging and things like that. We have a tool for Slack, and another tool to integrate with mail via Mandrill. We created a CMS for our website; we use Plone for this, and we created an Angular front end connected to Plone using the REST API. We created a small CRM to hold our internal hotel database, to contact hotels and manage this information. We created a sitemap service to help build the sitemap, from Plone to our front-end website. We have a gateway and a router for our API, making sure we just point our requests to this router, so we can create policies and caching and things like that, and it also denies access to parts of our API that are not allowed. Then, the business process: we need to create one or more assignments for photographers. Usually it should be one, but sometimes we have to change the photographer; then we can cancel the assignment and create a new one, keeping the history. When it's assigned to a photographer, we can also schedule it, with the contact information, to go there and do the shoot.
In the end, after this, we move the assignment to QA, as I explained, with automatic validation, to be internally approved, then externally approved by the customer, and delivered. Each time anything happens in this process (changing status from received to scheduled, to assigned, to delivered, to submitted), we have a workflow managing each step, knowing exactly what's going on, being able to filter in the front end, and being able to see which item is in which status, making everything visible. Because these changes go through our central message handling, we can index things in BigQuery, do logging, dispatch messages to send mail, and things like that. This product is composed of the front end, which is Angular 4, of the API of Briefy Leica, and of some scheduled tasks that check statuses and do things automatically. For this solution, we have some important domain models, all created using SQLAlchemy: connecting the customer, defining the projects and orders, one order that can have one or more assignments. We also have information about the creative professionals, the users from the customer side, and the users from our internal side. Just to give an idea of our internal workflow: besides defining the workflow, we also created a small tool to generate a graph of the workflow from the code and put it in our documentation. This is another thing that we could open source in the future, together with the workflow library. It was a really intense process of understanding the process, defining the workflow, discussions. What was really amazing in the end was the solution, and how it really makes things manageable: it creates a lot of data, a lot of history about what's going on, so you know what's going on all the time; you have control over your process. What does that mean in the end?
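The assignment workflow described above can be modelled as a small state machine that also records the audit trail. The transition table below is a simplified guess at the real one, and the actor names are illustrative.

```python
# allowed transitions, simplified from the talk's assignment workflow
TRANSITIONS = {
    "received": {"scheduled"},
    "scheduled": {"assigned", "cancelled"},
    "assigned": {"in_qa", "cancelled"},
    "in_qa": {"approved", "assigned"},      # QA can bounce an item back
    "approved": {"delivered"},
}

class Workflow:
    def __init__(self, state="received"):
        self.state = state
        self.history = []  # (actor, old, new), like the talk's audit trail

    def transition(self, new_state, actor):
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError("%s -> %s is not allowed" % (self.state, new_state))
        self.history.append((actor, self.state, new_state))
        self.state = new_state

wf = Workflow()
wf.transition("scheduled", actor="Ariana")
wf.transition("assigned", actor="Ariana")
assert wf.state == "assigned"
assert len(wf.history) == 2
```

Each successful transition would also fire an event onto the central queue, which is what makes the filtering, BigQuery indexing, and notifications possible.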
Really, you have events, you have things happening all the time, and you can take actions, you can automate things, and this is really, really impressive. For example, we have Ariana; Ariana is one of our project managers. She assigned a photographer to an assignment, for example, 120 times, and we have the workflow history; when this assignment happens, the event will be processed and you can take actions: send a mail, send a Slack message, and anything else. Also, when you validate a professional: we have Ophelia, our finance manager, and she had the job of checking the legal information of the professionals, contracts and things like that, before approving and validating professionals. So, as I said, Briefy Leica fires events, the events are handled and dispatched, and, for example, you send a mail to the creative. So, that's it. Thank you very much. Any questions? Any questions in the room? What was, from your side, the most challenging part of the project? I think one of the most challenging parts was creating the workflow solution, and also creating the data models together with it, because we put a lot of logic on the back-end side to make the front end simpler and to simplify the API in the end. We really invested a lot in defining the models. At some point, we noticed that our models were not working for what we needed; we had to rebuild, to refactor everything. This was really an important change in the middle, but it was really necessary, and afterwards things worked. I didn't mention another big challenge we had: when we joined the company, they had a solution, a kind of online database, to manage their process. We really had a hard time migrating this data, because at some point we had to create a new solution, and create some small tools to manage the issues we had with this tool, because we could not see what was going on.
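The event-dispatcher pattern described here (one service reading every event off a queue and fanning it out to actions like mail and Slack) can be sketched as a registry of handlers per event name. The event names and handlers below are invented for illustration.

```python
from collections import defaultdict

HANDLERS = defaultdict(list)  # event name -> list of registered actions

def on(event_name):
    """Register a handler for one event type."""
    def register(fn):
        HANDLERS[event_name].append(fn)
        return fn
    return register

@on("assignment.assigned")
def mail_creative(event):
    return "mail to %s" % event["creative"]

@on("assignment.assigned")
def post_slack(event):
    return "slack: %(creative)s assigned" % event

def dispatch(event):
    """What the dispatcher loop does for each message pulled off the queue."""
    return [handler(event) for handler in HANDLERS[event["name"]]]

results = dispatch({"name": "assignment.assigned", "creative": "jane"})
assert results == ["mail to jane", "slack: jane assigned"]
```

Because every microservice publishes to the same queue, adding a new reaction (a BigQuery indexer, say) is just another `@on(...)` handler, with no change to the services that emit the events.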
And things started to just fall apart, and we had to deal with it. At least this tool had an integration with a message system, so we could receive some events about what people did, and we could create business rules: oh, when this happens, let's check the consistency, let's make sure everything is consistent, change things. It was a really crazy moment. I have a question: what are you planning to release as open source? What's the lowest-hanging fruit? I think there are a lot of low-hanging fruits. We have the check-style component, we have the sitemap component. [unintelligible] Okay, but anyway. And I really would like to deliver the API part, briefy.ws, as an easy way to create APIs; I think it's really nice. And the workflow library, I think, because they work together. I didn't mention that with briefy.ws you can create not only the REST endpoints: we have an automatic way to expose the versioning information of the model, if the model uses versioning, and to expose a way to trigger the workflow transitions of the model. So you just need to declare the endpoint, where it is mounted, and which model, and it's simple: the validation of whether the user has permission or not, whether the transition exists or not, things like that.
I will talk about our work at Briefy, a startup company from Berlin that provides visual content on a large scale. When I started working at Briefy last year we had a big challenge: build a new product from scratch to automate the entire business. We built an entire technical solution consisting of thousands of lines of code running in the cloud with only three developers. We managed to create and deliver a set of new components: * Microservices with Pyramid and SQLAlchemy * Frontend with Angular4 * Event dispatcher service handling: notifications, logs, bigquery integration and async tasks * Paas environment with Deis to run our (micro) services in AWS * Workflow library to model our business process I will talk about this case of study from the initial challenge, technical decisions, non-technical issues and with an overview of each component of this architecture, lessons learned and future challenges.
10.5446/54068 (DOI)
Yes, and Baobáxia is a Brazilian project which I got acquainted with. I'm a software developer who was originally trained as a theoretical physicist. Since 2011 I have been working for a company in Denmark called Magenta, like the color. The company I work for specializes exclusively in free and open source software. A client comes to us and says: we want you to develop some software for us. We say: yes, we can write a contract with you, but only if you want it under a free license. Our mission statement says we can't do it otherwise. That's a nice thing. Personally, I met the free software movement and the concept of free software back in 2005. I started blogging about it, started organizing groups around Ubuntu, and we had a Free Software Foundation Europe group in Aarhus from 2012, doing monthly meetings. We had it until 2015, when we organized the LibreOffice conference for 2015. Afterwards I got other priorities, and the other members got other priorities, but that's what happens anyway. In 2014 I went to the first international festival of technoshamanism, which was held in a town called Arraial d'Ajuda, in Bahia, in Brazil. I got there because of the convergence of different movements I was interested in. Technoshamanism seemed to be about free software and free technology, it seemed to be about reconnecting with the earth, with an ecological perspective, with things like permaculture, but it also seemed to be about spiritual things, like connecting to the ancestral lifestyles of the indigenous communities. My experience in Northern Europe is that these movements are very fragmented: if you're interested in one thing, you're not supposed to be interested in another, but I was interested in the whole picture. I was very impressed with that. I met this activist convergence of the free software movement in Brazil, and I met a guy from Italy called Vincenzo Tozzi, who is the founder of this Baobáxia project.
This is what the festival was like. This was the opening of the festival: people around the fire. All these colourfully clothed young people are mainly hackers and artists from cities like Rio de Janeiro or São Paulo, and some of the people who look more brown-skinned are mainly indigenous Pataxó people. This is also from the opening ceremony; the Pataxó did a huge ritual for us in which everybody was involved. That's an example of this convergence. This is Vincenzo, with another participant in the second International Festival of Technoshamanism, which we organised in Bahia in November last year. Actually, as part of that festival, we installed a node of this system, Baobáxia, in a Pataxó village, which is pretty remote, 14 kilometres from the nearest real city. This is Vincenzo again, and here he is talking to the chief of this village, actually discussing with him the pros and cons of installing a node of this Baobáxia system in the village. Just to give you some context: Baobáxia is a project which is organised by the Mocambos network. The Mocambos network is a network of more than 200 African-descendant and indigenous communities in Brazil, and it includes some urban cultural centres in marginalized neighbourhoods; many of them are occupations or squats. Does anybody here not know what a quilombo is? A quilombo is like a maroon society: a society which, historically, was founded by runaway slaves. The most famous of these quilombos was called Palmares, and it existed from about 1600 to about 1700, for about 100 years. There were about 30,000 people living there, in an area about half the size of Portugal, I think. So it was a huge thing. Today there are about 4,000 surviving quilombos in Brazil. About 1,100 of these have recognition from the Brazilian government: they have their land and their rights to the land. The Mocambos network is a technological collaboration between 200 such societies.
They were connected, yes. And this is a meeting of the Mocambos network, where they are doing training in free software, exchanging ideas, also doing cultural things like music and dance, but also nerding out with technical stuff. The origins of this network are in a place in Campinas called the Casa de Cultura Tainá, which is basically a culture center, and in the Brazilian government's program for free software in the 2000s. What happened in the 2000s was basically that the socialist Lula became president, and for the first time there was a president in Brazil who was not from the elite. He started a program to empower the traditional societies. His minister of culture was Gilberto Gil, the very famous musician. He went to a large conference in São Paulo, where all of the free software movement and the street artists of São Paulo were, and he fell in love with all this technology and free software. So he talked to them, also with the idea of using free software for educational purposes: why don't you come to the ministry of culture and do a project to install free software all over Brazil, in the traditional communities? So they made a program to install basically Linux labs, with multimedia production software and educational software, all over Brazil. The people they hired for it were the core of the free software movement, activists, people with long hair and jeans, and they were sent out to the traditional communities. What happened was that there was a synergy, because all of those people from the free software movement fell in love with what they saw in the traditional communities. At the same time, the people in the traditional communities in many places realized: we can do things with this stuff that will make our lives better, and because it's free software, we don't have to pay for all sorts of stuff.
So this project, Baobáxia, is about building infrastructure for these people, but there is also a serious side to it, and that is the struggle between, you could say, the traditional communities and the elites in Brazil. Basically, due to crime, police violence and so on, there is a massacre of many black people in Brazil all the time. Maybe not an intentional genocide or a targeted massacre, but if you look at the statistics of who is shot and who is not shot, black and white, there is a trend. There is also the economic growth, which is taking all the resources: there is not a lot more land for agriculture, and there are a lot of agricultural exports to be had. Indigenous communities and quilombos have huge areas where they let things grow, let the nature grow, the forest grow, and the landowners that surround them in many places think: why do those lazy people not cultivate their land? Think of all the nice soy fields we could have there. So they are under pressure from the surrounding society. It's also about maintaining the traditional culture in a setting of pressure from the outside. So what is Baobáxia? Baobáxia is a distributed network that works with or without the internet. Basically, it's about being able to produce digital content and store it in a network, in an archive, which can be propagated, transported, shared with other communities. The Mocambos network, which I am going to try to show you: this is the Mocambos network, all of those dots. If we zoom into it, we see how they are dispersed across Brazil. And there are two observations here. One: this is really a collaboration of communities from all over Brazil. The other thing is, if you notice some places like up there in the Amazon basin, out there, these are very remote, rural locations. It's not just that they don't have internet or cell phone coverage.
It's more like they don't have that, and they're not going to get it, like, ever. So if you want to distribute content between those communities, it must be offline. Baobáxia is an offline network for sharing content. The idea of calling it Baobáxia is that you combine baobab and galaxy: it's the galaxy of baobab trees. Basically, Mocambos is a network, and each of the nodes of the network is called a mucúa, which is the fruit of the baobab tree. So it's a network of fruits of the baobab tree. A baobab tree is an African tree which can live thousands of years. That means that, when talking about history, the baobab tree is an important point of reference. If you have a 2000-year-old baobab tree near a village in Africa, then they may have local traditions: this is the baobab tree where there was an important battle 1000 years ago; 500 years ago, there was an important wedding here. So it is like a focal point for the ancestral history. So Baobáxia is the audacious project of wanting to plant baobab trees for the conservation of Afro-Brazilian cultures for the next thousands of years, of moving the African ancestrality to Brazil. Like I said, it's the symbol of the communities' shared memories, the galaxy of shared memories linked to the land. There is a film here which I will skip, because I don't have a lot of time, but let's talk about mucúas. A mucúa is the fruit of a baobab tree; it's also the word for the nodes of the Baobáxia network. Each community has its own mucúa, which is a computer with the Baobáxia system installed. You also have mobile mucúas based on pen drives, hard drives, disks or laptops. So a mucúa is not a server; in a way it is, but this is a mucúa. It has a figure on it, and it has a Raspberry Pi inside. Then it has a wireless card, and if you turn it on, it has a wireless network and you can access it. It's closed and it's offline, with no connection to the internet. So this is digital land for the African culture to grow.
So this is a conquest of digital land for the communities where it gets implanted. The content will reside in the communities; it is stored in the communities. It does not depend on the cloud. Basically, this is about replication. So actually, the content of all Baobáxia nodes might eventually be in all of them, but it is being replicated slowly by physical transport. So how does it work? Technically, we have some software infrastructure which is basically built on synchronizing metadata with Git and moving the digital media with git-annex. So you have a quick transport, which is the metadata, and a slow transport, which is normally physical, done using git-annex. So this is the architecture, basically. This is a data center node, meant for storage of cultural heritage. This is an education center where they have annual conferences and annual meetups where they do training and so on. This is an activist driving around on a bicycle, and this is a quilombo. Then the data is transported like this: on a bicycle, in a car, in a canoe. If you want to go to some places in the Amazonas, you can only arrive by canoe. I am sorry if it is a bit confusing, but I don't have much time to explain it. Going to the next thing: the architecture is built on Git and git-annex. That is the real backend, the real database. We have some middleware, which is Python and a Django REST API. Basically, if you have the backend, Git and git-annex, you can always recreate the Django database entirely from Git and git-annex. The users are also checked into JSON files in Git, so the system is also completely federated. If you have a user in one mucúa, you have it in all of them, once your user has been synchronized, and you can do things or not according to the local rules. The front end is built in HTML5 using Underscore.js and Backbone.js. The system is working, and the foundation with Git and git-annex is completely solid.
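The two-speed sync described here — fast metadata over Git, slow media over git-annex — can be sketched as a command sequence. The subcommands (`git annex sync`, `git annex copy`) are real git-annex commands, but the remote name and the exact invocation Baobáxia uses are assumptions for illustration:

```python
# Sketch of one Baobáxia-style sync pass against a transport remote
# (e.g. a removable drive carried between communities). "sneakernet"
# is a made-up remote name; the git-annex subcommands are real, but
# the actual Baobáxia scripts may invoke them differently.

def sync_commands(remote="sneakernet"):
    """Return the commands for one metadata + content sync pass."""
    return [
        # Fast leg: commit and exchange git history (file tree, JSON
        # metadata records) with the remote.
        ["git", "annex", "sync", remote],
        # Slow leg: copy the actual media content to the remote so it
        # can travel physically to other mucúas.
        ["git", "annex", "copy", "--to", remote],
    ]

for cmd in sync_commands():
    print(" ".join(cmd))
```

Because the metadata leg is tiny, it can also run over a slow or intermittent link, while the content leg only ever runs against whatever transport remote happens to be plugged in.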
We have confidence in that. It is installed in more than 30 communities. There are 45 nodes of the system, but some are developers' nodes and experimental ones. However, the front end is slow and there are difficult bugs in the Django layer. We are thinking maybe we should rewrite the front end in Angular. We are thinking maybe we could reimplement the backend without Django, in Flask maybe, and work directly with git-annex without having the Django layer, and use Solr search or something. Maybe even in C or C++, that is Ventsus' idea, so we could put it on really low-end devices. Next week we are having a sprint, a brainstorming, in Benevento near Napoli in Italy to bootstrap the next phase. Everybody is welcome. Right now we use it to preserve and diffuse Afro-Brazilian and indigenous cultural heritage. We could bundle this with radio or video conferencing. We could do functional tags, like saying: this is heritage, please take this data to a heritage mucúa; this is private, this data does not leave this mucúa; this is important documentation, like scans of deeds for the land, publish it everywhere. Transportation services, postal services, a WordPress plugin. I will just do a quick demo, or at least show you how it looks, if it does what I want it to do — it is just slow. I have it here, but it is set up for the other resolution; nothing to be afraid of. This is a multimedia archive. You have text, as you can see. These are the nodes of the system, mucúas. If you look at one of them, like Hindler, that is my own development mucúa. I am not accessing another node; I am accessing the local copy of that node on this mucúa. In principle, all of these nodes could have copies of all of the network. I think I just realized my problem here was that I forgot to log into the wireless, but it does not really matter. If you want to contribute, how do you do it? Go to our GitHub repository. If you cannot see this, I will tweet the slides after this presentation.
We think that this concept of federated, eventually connected networks is actually an interesting idea in computer science. It is an interesting problem, and a real-world problem for these traditional communities. We are always looking for ways to find funding, because some people, like me personally, find it difficult to do full-time work for clients and develop pro bono on the side. We are trying to find resources. Anyway, thanks a lot for listening. Thank you, Kastin. We have time for just one quick question, if you have one. Otherwise, just one note for everyone: we are taking the group photo right now at the stairs outside. Just follow the people and get into the photo. Thank you, Kastin.
Baobáxia is a community-built project to connect about 200 Brazilian quilombos to assist the interchange and preservation of traditional, community-built culture, under especially challenging conditions including remote locations without access to the Internet. Baobáxia is built in Django with sqlite or postgres (middle-end) and Backbone.js (front-end) with a back-end in git-annex, which handles the synchronization between different nodes in the network. We're currently considering the possibility of rewriting the system, e.g. with an API in Flask directly on top of git-annex, skipping the Django middleware, and a new front end in Angular.js. The talk will present the system and the movement behind it and will discuss the pros and cons of the possible paths for future development.
10.5446/54071 (DOI)
Okay, hello everybody. I will try to be fast to regain some of the time that I lost here. I'm Alessandro Pisa. I studied as a physicist, but something went wrong and I started working with Plone. Sorry, can you hear me? Okay, great. So I was saying that I started working with Plone something like 10 years ago, and for the past years I have been working at Syslab, where quite often I was the migration guy, the one migrating websites. A migration is just a matter of state: you go from state A to state B, where you have different servers, different machines, different software versions or different software applications. And of course the customer does not care about how you get from A to B, but you do, because all the cost of the migration is on you and your company. So sometimes the migration proceeds along a well-defined path and the cost can be predicted, and some other times things happen and you have to go off the beaten path. And of course I have a story to tell you about a particular migration where I did some weird things to move my site from state A to state B. The state A was a Plone 4.1.3 site, and we had to reach version 5.0.6. We had a custom UI, and we wanted to upgrade Plone and install Quaive — that is, Plone Intranet — on top of the latest Plone 5 version. We had a different set of add-ons in the two states, and one of the important things was that we wanted to get rid of Archetypes and switch completely to Dexterity. We also had Solr 3 in the original site and we wanted to upgrade to Solr 5. So what does the standard theory say about that? That you should first upgrade to the latest Plone 4.3 version, then to Plone 5, then install Quaive, then run the ATCT migrator to change your Archetypes content to Dexterity. This is first-class travelling, because Plone out of the box gives you great tools to do that. But then we have some tricky parts. Of course we have to upgrade Solr.
We also have to make the add-ons compatible with Quaive: uninstall the old ones and install the new ones. And maybe there are some compatibility issues, so this is quite tricky, and your first-class travelling starts to become a low-cost flight. But there is another part. The Data.fs was not that big, it was 10 gigabytes, but we had 300 gigabytes of blobs. And this is tricky, because we had 200,000 Archetypes files, each one with previews. So for every file, for every PDF, we had a preview of the pages, and we also had versions of those files. And also 5,000 users that had been using that site, in a nasty way, for years — and that, mind you, infects your Data.fs with nasty things. So we made a rough time analysis of what would happen if we just followed the standard path. Just getting fresh data would take hours, just to sync the Data.fs and the blobs. Upgrading Plone would also take hours. Upgrading the products would be tricky because of compatibility problems and whatnot. And migrating from Archetypes to Dexterity would take days, so of course you cannot proceed along this path anymore. Why would it not work? Because you would suffer already in the development phase — you develop a migration, and it is too slow. Also, we agreed with the customer that the migration should take one weekend, and in this weekend we wanted to do the migration plus quality assurance, and maybe fix some last-minute problems. And if that did not work, we would have to roll back, because we could not block the 5,000 users that were using this tool as the central tool for their work. So we needed to cut down the time to less than one day at least. So we defined a strategy. First we had to define our environments, then make the migration procedure convenient and automatable, and identify the bottlenecks and remove them. We had three environments available. Of course the production cluster.
We had several virtual machines hosting the production data and production instances, and a staging environment where we and the customer could test whether the application was working right, before and after the migration. And we had just one virtual server dedicated to the migration. The server dedicated to the migration was not that powerful: just 8 gigabytes of RAM, 2 CPUs, and quite some disk space to store the migration data. And we decided to perform the migration using a view. Why a view? Because it helped us to organize the code: each function called by this view would be a convergent step, so we could reach well-defined states while doing the migration. And it allowed fast development, because when developing we just used plone.reload and re-ran the migration view, and the modifications were applied immediately. Another thing is that if you want to call this view from a bash script, you can just make a curl request to your Plone site, and curl will call the view with the proper cookies for authentication and so on. And this is better, in my opinion, than using a script, because every time you modify a script you have to kill the instance and run it again, and that takes time. So how does the migration view look? The __call__ method is just made of several calls to the different steps. Each step should be convergent: it should check whether it needs to run, run, make its modifications to the Data.fs, then log what happened, and then commit. So after each step there was a commit. What did we achieve with this? A well-defined, reliable and clean upgrade path for our Data.fs. That was already one step forward, because every time you make a migration you probably have to do many things, and you don't want to have a checklist on paper and remember: okay, I have to do this — and then you forget, and you screw it up, and you have to start again. Yes, we also minimized the need for manual operations.
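The pattern described — a view whose `__call__` runs a sequence of convergent, individually committed steps — might look roughly like this. All names are illustrative; in the real view the steps would be methods on a Plone BrowserView and `commit()` would be `transaction.commit()`:

```python
# Schematic sketch of the convergent upgrade-view pattern: each step
# checks whether it still needs to run, applies its changes, logs, and
# commits, so the whole view can be re-run safely at any point.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("migration")


def commit():
    log.info("commit")  # stand-in for transaction.commit()


class MigrationView:
    steps = ["clear_catalog", "upgrade_plone", "migrate_classes"]

    def __call__(self):
        for name in self.steps:
            step = getattr(self, name)
            if step():  # each step returns True if it changed something
                log.info("ran step %s", name)
                commit()  # one commit per step: safe intermediate states
        return "done"

    def clear_catalog(self):
        return True

    def upgrade_plone(self):
        return True

    def migrate_classes(self):
        return True


print(MigrationView()())
```

Because every step is convergent, re-running the view after a crash or a code reload only performs the work that is still missing.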
We also had another problem: just rsyncing 300 gigabytes of blobs is a big task. Even if you have already synchronized them once, when you run the migration you want the latest blobs, and rsyncing them again takes a lot of time. So the question is: can we start the migration without the blobs? And the answer is yes, because using the package experimental.gracefulblobmissing you can replace your missing blobs with placeholders that you can override later with the original data. And this allowed us to start our migration fast. So what we do here is prepare the migration environment — you do that once — and copy the Data.fs when you need it, that is, when you have to do the final migration or a test run before the final migration, and you can sync the blobs in the background. Of course, we also decided to disable Solr, because we would take care of Solr later. What did we achieve? We can now start the migration in minutes, because syncing one single file of 10 gigabytes is quite fast. Can you do this for your own migrations? It depends, because maybe your migration procedure really needs Solr or the blobs or whatever. If you can live without the missing parts, you have to be able to fix them later, when they are available. And as you will see, we were able to do exactly that. There is another problem now: just upgrading Plone through the portal_migration tool takes quite some time, and there are reasons for that. The first reason is the portal_catalog, because in the Plone upgrade you sometimes add an index or a metadata column, and then you have to take all your brains, get the objects, and reindex them to fill the new index or metadata column. Okay, this takes quite some time — say 45 minutes. Then there is another index and another metadata column, and you have to repeat the same thing again: take all the brains, all the objects, and so on, and this takes another 45 minutes.
And then again and again. So this was quite a huge blocker, and in the end you had huge transactions, and I had to monkey patch this procedure to make intermediate commits, because otherwise your virtual machine would die. So we have been bold, really bold: we decided to go brainless. One of the first steps of our migration view was to wipe out the catalog. We took the portal_catalog and completely cleared it, and then we ran the migration. The achievement was that the Plone upgrade, instead of running in 4 hours, took 1 minute — and this was quite an achievement, because we didn't have to get all the objects from our database. It was probably just fixing some skins, setting some Plone registry entries, and that was all. So, can you do that at home? For us it worked, because we planned to modify all our objects anyway when changing them from Archetypes to Dexterity, so they would be reindexed at a later stage. But some of your upgrade steps might depend on the catalog being there, so you have to check whether you can adopt this solution. Anyway, if you do not have those concerns, you will be amazed at the advantages of going brainless. Then we had another problem with the add-ons. If you decide to go first to the latest release of Plone 4.3 and then to Plone 5, you have to run the buildout twice, prepare two environments and so on, and you also have problems with changing versions of the add-ons themselves. You may also have to make those add-ons compatible just to be able to uninstall them — and uninstallation is always a problem, because you never know whether the uninstallation procedure is good enough to clear everything that you don't want anymore. So again, we have been bold and went straight to the goal. We decided to have one single buildout, skipping the latest Plone 4.3 step, and we just started with 5.0.6 — and our buildout did not contain the unwanted packages anymore.
And yes, of course, now the instance will not even start, so we have to heal and clean up the Data.fs to cope with the fact that we no longer have some classes or interfaces. What is the solution we decided to adopt? We decided to use the alias_module function, which allows you to replace code that you don't have anymore with new code. This allowed us to remove all the unwanted add-ons from the buildout and start directly with the latest, final buildout. How does it work? You basically provide a missing-classes module, you import the alias_module function from plone.app.upgrade, and you say: okay, for every class in my missing package, for every interface in my removed package, I just alias it with the one from the missing-classes module. And how does the missing-classes module look? Like this: you can import whatever you want; for example, you can specify your own interfaces, or define persistent tools that are not there anymore, and eventually you can provide some functions or attributes there. Another option was to use wildcard.fixpersistentutilities, but then you don't have the ability to define missing functions, as you see here for this missing persistent tool — so it is a big advantage to use the alias_module function. Once you have that in place, your instance is running and you can start removing the enemies. The first enemies are all the broken persistent utilities, and this is the code we used to remove them. Okay, this is something you can read later when I share the slides. Another thing we fixed was portal_setup, because removing the packages abruptly left it in a not-so-good shape — but it was quite easy to fix. We also cleaned up the skins: instead of running the GenericSetup XML, we just took the portal_skins tool and fixed it, removing all the selections we didn't want and setting the good layers.
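The alias_module helper mentioned here comes from plone.app.upgrade; the underlying mechanism is simply putting a stand-in module under the old dotted name in `sys.modules`, so that old pickles still resolve. A self-contained illustration of that idea follows — the package name `removed.addon.interfaces` is made up:

```python
# Illustration of module aliasing: register stand-in modules under the
# dotted name of a removed add-on so that pickled records referencing
# it can still be loaded. In Plone this is what alias_module from
# plone.app.upgrade arranges; "removed.addon" is a made-up package.
import pickle
import sys
import types


class IOldThing:
    """Stand-in for a class the removed add-on used to provide."""


# Pretend IOldThing lives in a package that no longer exists on disk.
IOldThing.__module__ = "removed.addon.interfaces"

for name in ("removed", "removed.addon", "removed.addon.interfaces"):
    sys.modules.setdefault(name, types.ModuleType(name))
sys.modules["removed.addon.interfaces"].IOldThing = IOldThing

data = pickle.dumps(IOldThing())  # an "old" record referencing the gone module
obj = pickle.loads(data)          # resolves thanks to the alias
print(type(obj).__module__)
```

The ZODB resolves persistent records through the same import machinery, which is why aliasing the old dotted names is enough to let the instance start again.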
And this is something that is good to have, because after years of usage the skins layers can get quite messy. And okay, there may be other things to fix: for example portal_actions, old portal_transforms, old portal_types; some site properties may need to be migrated to the portal registry. And this is the point where you want to get rid of obsolete content that you don't want anymore — which is always a good idea when you perform a migration: remove everything you can. So what are our achievements? We never needed intermediate buildouts; we focused on the goal and not on something we would discard at a later stage. And adding or getting rid of a patch is easy, because the code is quite self-contained: it is really easy to provide a new missing interface, or to remove the alias_module trick. Can you do that? Probably yes, but you have to know what is happening in your Data.fs, and you have to know how your Data.fs should look at the end of this process. But the biggest problem for our migration was the time needed by the ATCT migrator. The problem with the standard migration is that it insists on creating new instances of your objects, and then the new instances replace the old ones. With 300 gigabytes of blobs this has a huge cost, and I am not even sure that all the old versions are kept — so you get duplication of your data, and maybe your machine dies before you reach the desired state or the procedure completes. Also, I think this versions problem had not been solved at the time. What was our solution? Instead of replacing content, we went for cosmetic surgery: we were very careful, and decided to avoid creating new instances for our content; instead we modified the existing content in place to reflect the new state that we wanted.
We also didn't want to create new blobs, because duplicating 300 gigabytes of blobs is a nightmare, and they are already there. Given that the catalog was now empty, we decided to take the ZopeFindAndApply script and run it on the portal, applying this self.migrate function, which basically does the following: it takes an object and the path — because this is the way it is called by ZopeFindAndApply — it picks the correct migrator for the object, based on the portal_type and on where your object is, and once you have the migrator, you call it. The migrator does something like this. It is a really simple thing: it has a __call__ function that changes the base class, migrates all the fields to adapt them to the new class, reindexes the object, and resets the modified date — because reindexing the object also resets the modified date to the moment you call reindexObject. How does changing the class work? You just import the new class, you detach your object from its parent in the ZODB, you change the __class__ attribute to the new class, and you reattach the object to the parent. Why do you have to detach and reattach? This is to outsmart some ZODB caches. I am not really aware of everything that is happening there, but I know that this works most of the time, and you can try it yourself with the Python interpreter. You can define a class A with a class attribute a equal to 1, the same for B, then instantiate A; you can also define an instance attribute c equal to 1, and then switch the base class of the instance from A to B. When you access a.a, you see that the class attribute is not there anymore, because the lowercase a is now an instance of B; it has an attribute b, which is a class attribute, and it keeps the c instance attribute, and they are both equal to 1. This works perfectly, and I really like that Python works like that. It's amazing.
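The interpreter experiment just described can be typed out verbatim:

```python
# Swapping an instance's class in place, as described above: class
# attributes follow the new class, instance attributes survive.
class A:
    a = 1


class B:
    b = 1


obj = A()
obj.c = 1            # instance attribute, lives in obj.__dict__

obj.__class__ = B    # the in-place "migration"

print(hasattr(obj, "a"))  # False: A's class attribute is gone
print(obj.b)              # 1: B's class attribute is now visible
print(obj.c)              # 1: the instance attribute is kept
```

This is exactly why the in-place migration works: the object's own data (its `__dict__`, and in the ZODB its persistent record) stays put, while behavior and schema come from whichever class it currently points at.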
Now, when we change the base class, we also want to adapt the fields, because maybe they have been renamed, you have to initialize new ones, and you have to update them — before, they were camel-case DateTime objects, and now you want the regular Python datetime ones. This is an example: we import the old blob class and the new one from plone.namedfile, and we have a function to migrate the file field. You pass the field name — it will be 'file' most of the time — you take the old file, which is an instance of BlobWrapper, you instantiate the new file, and you start copying the attributes that you want to migrate. The most important attribute is the blob, and the blob is unchanged between the two implementations: it is always a ZODB.blob.Blob. This means we are not touching the blob on the file system, we are just moving the ZODB Blob object, which contains a reference to the path on the file system. Then we set the file name, and we set the attribute on our migrated object. The final step is to reset the modification date: of course, when you initialize your migrator, you want to store the old modification date, so you can reset it at a later stage, after you reindex your object. Can you do that? This is hard, and you must know your content perfectly, because every migrator should put all the bits in the right place; you have to take care of marker interfaces, so you probably need some more methods to fix those as well; and you have to take care of the ZODB cache, because it may happen that even if you change the __class__ attribute, the object after some time goes back to the original class. What are our achievements? Okay, with this trick the migration of the Data.fs now takes something like 12 hours.
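The key point — moving the blob reference instead of the bytes — can be shown with stand-in classes. `Blob`, `OldBlobWrapper` and `NamedBlobFile` below are simplified stand-ins for the real ZODB / plone.app.blob / plone.namedfile classes, not their actual APIs:

```python
# Stand-ins for the real classes: ZODB.blob.Blob is a handle to a file
# in the blobstorage; plone.app.blob's BlobWrapper and plone.namedfile's
# NamedBlobFile both just hold such a handle.
class Blob:
    def __init__(self, path):
        self.path = path  # the file on disk is never read or copied


class OldBlobWrapper:
    def __init__(self, blob, filename):
        self.blob = blob
        self.filename = filename


class NamedBlobFile:
    def __init__(self):
        self._blob = None
        self.filename = None


def migrate_file_field(old):
    """Build the new field value around the *same* blob handle."""
    new = NamedBlobFile()
    new._blob = old.blob        # move the reference, not the bytes
    new.filename = old.filename
    return new


old = OldBlobWrapper(Blob("/var/blobstorage/0x01.blob"), "report.pdf")
new = migrate_file_field(old)
print(new._blob is old.blob)    # True: same handle, nothing duplicated
```

Since both field implementations store the same kind of Blob handle, handing it from the old wrapper to the new one costs nothing, no matter how large the file on disk is.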
This means that we could launch the migration before going to bed, at 6pm, wake up the day after, and everything was in place — a really nice achievement. We never needed to actually touch the blobs on the file system, and so far we never needed Solr. Now it is time to fix the missing parts. What are we missing? Basically Solr, which can be migrated in parallel, because it is just something related to the Solr application, and migrating Solr is quite fast — it can be done in half an hour; you can do it while brushing your teeth, for example. At the end, when the Data.fs is in the state that we want, we can just make atomic updates, which basically reindex everything with the new Data.fs data without touching the searchable text — the index that takes the most time to be computed, because you may have to take all the PDFs and office documents and convert them to text before indexing. Now it is time to copy the data to staging and production, because we want to keep the migrated data untouched by humans: only the migration procedures should touch it. Copying the Data.fs is fast, copying the Solr data is fast, and we can rsync the few new blobs later — the migration and the installation of new products created really few blobs, and it is okay to wait for them to appear, even while the QA phase is running; this is not a big problem. So this was our site before, and this is the state of the site we reached after installing Quaive — quite a nice achievement. And what we got was a big thank-you, because the biggest achievement you can have in this case is a thank-you from your customer, who is happy because they really understood the value you delivered and the professional way you delivered it.
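The Solr atomic updates mentioned here use Solr's `{"set": value}` partial-update syntax, which rewrites only the listed fields and leaves the expensive SearchableText alone. A sketch of building such a payload — the unique-key field name ("UID") and the example field names are assumptions:

```python
# Build the JSON body for a Solr atomic update: only the listed fields
# are replaced; everything else in the indexed document -- including
# the expensive SearchableText -- is left untouched. The {"set": ...}
# syntax is Solr's atomic-update format; "UID" as the unique key and
# the field names are assumptions for illustration.
import json


def atomic_update(uid, **fields):
    """Return the body for POST /solr/<core>/update."""
    doc = {"UID": uid}
    doc.update({name: {"set": value} for name, value in fields.items()})
    return json.dumps([doc])


body = atomic_update("abc123", review_state="published", Title="New title")
print(body)
```

Note that Solr atomic updates require the untouched fields to be stored (or have docValues), since Solr rebuilds the document internally from the existing values plus the updated ones.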
So, just as a recap: we made a nice migration plan, in my opinion; we always ran the worst part — rsyncing the blobs — in the background, and never needed them for any step; the catalog was cleared before starting, and this allowed a fast Plone migration; we used the module-aliasing trick to start directly with the final buildout; we used the in-place class migration to speed up the Archetypes-to-Dexterity migration; and we used atomic updates to avoid reading the blobs and recalculating the searchable text. Should you try this? Maybe you can apply some of these tips to your particular use case. In many cases the standard migration tools are perfect — I really like them — and in many other cases you may decide to go with something that fits your case better, for example transmogrifier or whatever fancy export/import procedure. There are also some things I did not tell you: instead of the __class__ trick you could also use alias_module or the zodbupdate package, which does something similar; rsyncing blobs is something that can be improved, and we did improve it; in the migration you may also need to take care of other catalogs, like the reference_catalog, or of link integrity; and of course we made some other minor modifications to the Data.fs and some other tweaks during the migration. Can we improve this further? Yes, of course, we could speed it up even more — but the migration started at 6 in the afternoon, and I did not want to improve it so much that I had to wake up at 4 in the morning. So 12 hours was the perfect time, and I actually got some sleep — I didn't tell you that, okay. So thank you very much, thank you Syslab for making this possible, and thank you all, because it's great to be here. Okay, I don't know how much time I have. Okay, so, perfect, thank you. Yes, David. Yes, yes. Yeah, yeah, yeah.
Also, another option that I had was taking all the indexes from the catalog — should I repeat the question for the video, for the people at home? No. The idea is to speed up the migration further by reindexing only the indexes that I know should change. This is a smart idea, but in the end the only index that takes a significant amount of time to be computed is the searchable text. One thing I tried, and it worked, was to take all the indexes and remove the modified index, so that I did not need to reset the modified date at a later stage. But in the end it would not substantially change your migration time. Yeah, so it's a nice point, but then I would have needed to wake up half an hour earlier, so. Okay, so, Philip. [inaudible audience question] Yeah, yeah, yeah. I did the same also with every content type. And I will share this presentation later, and there is actually a link in it to the talk that you did in Bucharest, because it was amazing, really, and it describes completely the standard path that you should follow — everybody should be aware of your talk. And I'm really glad that things are changing and that you have improved this migration so much. [inaudible] You also have to thank that guy with the red shirt, because I was inspired by a blog post from David — and I think it was also Martin who was discussing this — about changing the Dexterity classes. Yeah, also for David, yeah, for that. But we know that David is a great guy. Okay, thank you. So I was too fast — you put me under too much pressure with these technical problems at the beginning. Sorry? Okay, so I hope I was not too fast and you were able to follow. Anyway, this is a Google Drive presentation and I will share it online; it has quite some notes in it, so maybe even at home you will have something that is still usable. Okay. So I think that's it. Any other questions? So thank you again. Thank you.
In this talk I am going to share original ideas and novel techniques we developed to cut down upgrade and migration times. It was all triggered by the challenging task of upgrading a heavily customized Plone 4 intranet to Plone 5 and contextually installing Plone Intranet while minimizing the downtime. Solr was involved in the process. Given that we had to deal with 300 GB of data, the out-of-the-box Plone upgrade procedure and the migration from Archetypes to Dexterity were simply too slow to fit the given time constraints. We were forced to explore new migration paths and strategies. The outcome was a set of original ideas and techniques that can be shared and applied to other Plone/Zope projects as well.
10.5446/55602 (DOI)
Hello and welcome to our World Plone Day session. I'm Sally Kleinfeld from Jazkarta, a US-based company specializing in open source web technologies, especially Plone, and with me is Jeff Bowman, the IT manager of the Mountaineers, and Alec Mitchell, one of our developers here at Jazkarta — also a former Plone release manager; you might remember him from that. So Jazkarta has been working with the Mountaineers on their Plone site since 2013, and we're going to share a bit of that history with you. So with that I'll turn it over to you, Jeff, for a bit about the Mountaineers' history, and let me know when you want me to dive into a little tour of your site. Sure, thanks. Hi, Alec. Alright, so the Mountaineers is a nonprofit based in Seattle, Washington. We're volunteer-driven, and our mission is helping people explore, conserve, learn about and enjoy the lands and waters of the Pacific Northwest and beyond. We've been around since 1906, so well past a hundred years now — a well-established organization. We teach people how to recreate safely. We have lots of activities going on that are all run by volunteers, so that's basically who we are. How we got here was having several websites out there, with all of our committees and branches wanting their own space and claiming to be the Mountaineers, and people being confused about who we were — so we knew we had to get down to one website, one that worked well for everybody. That's how we got where we are. We started the project in earnest in 2013, and probably a couple of years before that we were figuring out what we wanted and engaging people to help us get there. We launched on May 5th, 2014, have made several feature additions and improvements to our website over the years, and recently have partnered with the Colorado Mountain Club.
It's a very similar organization based in, I guess, Golden, Colorado, or Boulder, or Denver; they're all over Colorado. I'm not sure where their home base is, but they're certainly Colorado, and they're just like us. They loved our website and asked if they could basically buy it from us, so we now have a partnership where we're going to be sharing a code base and having two websites that work the same but look a little different, because we are different and have a few minor differences. So that's where we are over the past seven or eight years, and we love what we have and what Jazkarta has done for us. Go ahead, Sally. Alright, well, let's take a look at the site. You want to give us a tour and tell me where to go, Jeff? Sure, yeah, this is our home page. As I said, we're a nonprofit, so we have our big gala this week, which is why we've got that image up there on the home page. We've got our blog, which is a place where people can learn about us in a nice way and where it's easy to add info. We've also got some information about trip reports and activities; you can see those two big green boxes there that help people find what they want to do quickly. So let's go ahead and go into activities, go to Find Activities. That's perfect. And while you're talking about this, why don't you give us a little bit of insight into how the members or your volunteers factor into the content on this website, because I know this website would just not exist without literally hundreds of Mountaineers volunteers that create content. Actually we have, depending on how you count it, upwards of 3,000 volunteers, and in some ways many of our members are volunteers too, doing stewardship and trail maintenance type projects. But in terms of adding content to the website, our volunteers do the vast majority of that. They add their activities,
which include a beautiful image, titles, descriptions of what they're doing, and the dates they're doing it. This is, in Plone terms, a faceted search, so it's an easy way for a member to drill down to what they want to do. Our volunteers have a similar menu of what we call routes and places, where they can pick where they want to go, and once they've picked from a list that looks a lot like this, they get to add their activity: when they're doing it, what their committee is, all the details about that particular activity, and then people can sign up for it. We have the start and end dates of activities, and we also have a registration period, so they can define when people can actually register, and of course all kinds of details about the activity and where they're going. We use Mapbox for a couple of map points, so people can see where the start and end might be. We also sell books, so if there are books, or Green Trails maps (one of our partnerships), people can see what might go with that in terms of a guidebook, or, if it's a course, what book might be helpful as required or recommended reading. And when they sign up they can actually add it to their cart, so they can have one transaction where they're signing up and paying for everything. Let's see, what do you want to do, do we want to try to sign up for an activity or just go on? Let's just go on. So we also have a similar thing for courses, where people can use the faceted search to find the course they want. Sally, how about here, if we go to course overviews? Sure. Everybody knows what a faceted search looks like now. One of the goals of our website was to make things simple for people to understand, so we have
this course overview page, and we also have an activity overview page, where we describe what people can learn and what they can do. Clicking on any one of these (we call them course activity buttons), people can learn more about what courses we offer, what they can learn, and how they can improve their skills. So this is a nice feature where, if they aren't quite sure what they want, they can learn more about the particular courses and activities we offer before they go into searching for them and signing up for stuff. Great. I know, or at least I think I recall, that after your site launch your analytics showed a pretty good effect on your SEO from the new structure of the site. Do you want to talk at all about that? Yes, we use Google Analytics. I'm digging down to it; I had that handy and went away from it. Let's just do it this way. There it is, you blew past it. We nearly doubled our page views in Google Analytics from our old website to our new website shortly after launch. We've also grown our membership: we were probably about 9,000 members, and today we have nearly 13,000 members. So the website has helped us grow our membership. We're probably more than double on our page views at this point compared to our old website, but as of a couple of months after launch we were 70% up. So it's been a big boon to us: more people can find us, and more people better understand who we are and what we do. Looks like Sally's looking at a branch page, which illustrates what you were saying about the fact that your branches wanted to have a presence, so we essentially developed little subsites for them. Yep. A nice thing about these pages is that the
branches, once these were built, don't have to do anything. All of the content on these pages, I mean, they could certainly change the picture if they want to, or that little summary note. Sally, if you could scroll down the page a little bit? Sure. So the latest branch news on the left, and those other portlets on the right with events, those are automatically populated, so branches don't have to do anything to keep them up to date. All they have to do is add blogs specific to their branch and add events, and those two top buttons will take people to learn about how they do courses and activities at their particular branch. And of course that gives their about pages a nice overview of the branch and how to get more involved with it. So which would you... oh, Bellingham. Bellingham is not our most active branch; maybe go to Seattle, that'll be a good start. Sure, let's see, Seattle. Smiling faces there on every scramble. If we go down to their courses and activities, you'll see they have almost all the course and activity buttons there. Our Seattle branch is our biggest branch, probably half our membership or better, and they offer just about every type of activity and course that we have to offer. Cool. One of the unusual things about this site is how tightly integrated it is with salesforce.com, the CRM (constituent relationship management) system that you use. You want to talk a little bit about how that works, and how much that has factored into the success of your website? Sure, yeah. Nearly every transaction on our website is synced with Salesforce: all of our contacts, every time they buy something, courses, activities, donations they make, are synced to Salesforce. So we have all that data that we can use for engagement
statistics, for engagement with people. Especially with the donations, our development team has all that information there, and they can make great use of how people interact with us to reach out for donations and all our fundraising efforts. We also use it to connect with our email marketing system; that's how we actually send out all of our newsletters, probably three or four newsletters a month, and each of our branches can use it to send a newsletter for their branch. We also use it for reporting, mostly for staff, but lately we've been integrating our Salesforce data with Google; there's a connector that allows us to get... but we just lost your audio, Jeff. Maybe while Jeff is getting back his audio, Alec, you want to say a few words about how the Salesforce to Plone integration works? Well, we've got you back. There we go. I needed to disconnect my phone and my speakerphone. Did you hear me? We heard most of it; I was just asking Alec to say a little bit about the technical stuff behind the Salesforce-Plone connection. Yeah, it's all pretty bespoke. I don't think we use any specific Plone add-ons for the Salesforce integration. We use the simple-salesforce Python client, and then almost all of the Salesforce API calls are done by asynchronous tasks in a Celery task queue. So whenever some sort of operation happens through the web, whether that's adding a new member, making a purchase on the site, or even creating some content on the site, depending on the type of content, we create an asynchronous task in Celery that will run in its own process and gets triggered once the transaction is committed. That way we don't have to worry too much about transaction synchronization, or about delaying web responses in order to wait for Salesforce API calls, which can be quite slow and expensive, and
we do a lot of them. And then we also have some periodic tasks that run once a day or once an hour that do maintenance and synchronization back and forth between Salesforce and Plone. So that's the basics of it. Want to say just a little bit about what data is synced back and forth, either Jeff or Alec? Jeff can probably answer that; he spends the time in Salesforce. It might be easier to say what we don't sync. From Plone to Salesforce it basically pushes every time we do something; it tries to send it to Salesforce right away, and it's everything, I would say, except our blogs and pages, things that are content only. Any time it involves a person signing up for something, or adding an activity or a new route or place or a new course, all that data gets into Salesforce. The benefit is that your volunteers who are creating your courses and activities create all this content, and then it automatically gets synced to Salesforce. They don't have access to Salesforce, but they don't need to, because they've created the content on the website. We sync our contacts, I think, four times a day, so if our development team has got an update from one of their donors on their contact info, that'll get synced as well. And we've got a few lists that we use on the website, a few picklists, that we can manually sync from Salesforce back to Plone. From a technical standpoint, because The Mountaineers is a nonprofit organization, they can take advantage of Salesforce's nonprofit licensing, but there are some pretty strict per-seat limits on the number of users that can have access, as well as limits on the number of API calls you can make. And so, in order to ensure we can use that nonprofit edition without incurring extra charges or doing
things like that, we can set up a system where all of these thousands of members have access and profiles, and their activity on the site ends up syncing to Salesforce, but they don't need Salesforce accounts. The traditional way these sorts of things often work is that people are using Salesforce portals, which can get very expensive and have performance issues as well. And then on top of that we do batch synchronizations for certain types of operations, so that we can synchronize many records all at once on a nightly basis, in a single transaction or a few small transactions, rather than doing it throughout the day in thousands of transactions. Those sorts of optimizations really help with costs and efficiency. Yep. Jeff, you want to give us a rough idea how many members The Mountaineers has, and how many people are active as logged-in members of the website on a daily basis? That I don't know, but we have nearly 13,000 members and around 3,000 active volunteers who would be creating content. On a daily basis it's hard to say how many there'd be; I could probably figure that out from Salesforce. I'm just curious if you have an idea of daily sign-ups and stuff like that for all your courses and activities. This has been an outlier year, of course, with not so many in-person activities. We made a pretty good switch doing a lot of our stuff online, with Zoom meetings and events and whatnot, and people really do want to get outside, so we have been able to do that in smaller groups. If anything, the pandemic has helped drive us to smaller groups outside, which is something we desire. It also puts the nix on carpooling, but otherwise... Right, yeah, we do a lot with carpooling.
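The pattern Alec describes above (queue Salesforce writes as asynchronous tasks that fire only after the database transaction commits, and batch bulk syncs into a few large calls to stay under the nonprofit edition's API limits) can be sketched in plain Python. This is an illustrative stand-in, not the actual Mountaineers code: the real site uses Celery tasks and the simple-salesforce client, and every name here (`queue_sync`, `after_commit`, `dispatch_batches`, `send_to_salesforce`) is invented for the sketch.

```python
# Illustrative sketch of the integration pattern described in the talk,
# NOT the real Mountaineers code. In production a Celery task queue and
# the simple-salesforce client play these roles.

pending = []      # records queued while a web request is being handled
api_calls = 0     # counts simulated Salesforce API round trips

def queue_sync(record):
    """Called during the request: record the work, send nothing yet."""
    pending.append(record)

def after_commit(success):
    """Transaction hook: dispatch queued work only if the commit
    succeeded, so a rolled-back request never reaches Salesforce."""
    if success:
        dispatch_batches()

def send_to_salesforce(batch):
    """Stand-in for one bulk API call (e.g. an upsert of many records)."""
    global api_calls
    api_calls += 1

def dispatch_batches(batch_size=200):
    """Send queued records in a few bulk calls instead of one call
    each, keeping usage under per-day API call limits."""
    while pending:
        batch = pending[:batch_size]
        del pending[:batch_size]
        send_to_salesforce(batch)

# 450 queued records go out in 3 bulk calls (200 + 200 + 50), not 450.
for i in range(450):
    queue_sync({"member_id": i})
after_commit(success=True)
```

Deferring the dispatch until after commit avoids both holding up the web response on a slow external API and pushing data for a transaction that ultimately rolled back.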
Let's go ahead and do some hiking and climbing. Let's see, what else? Anything you want to cover about books or any other parts of the site that we didn't really delve into that you especially want to mention, either of you? Alec, is there anything of particular interest? I don't know. The other unique feature of our website that brings us together is the badges. Yeah, I thought that was notable. So we have this thing called badges, and we use badges as a prerequisite to register for an activity; for example, you have to have a climbing badge to sign up for a climb. On our old website we basically had seven of them, one for each branch that offered the course. Now we have one: you take a course, you graduate, you get the one badge we offer, and it's the one badge required for sign-up. It has helped us standardize our curriculum across the board, and it makes it easier to know that anybody who's qualified can go on any hike; no matter what the branch is, they've got the same level of training and the same set of standards. We're not a hundred percent there yet, but it's certainly one of the other big features of our new website that's helped us quite a bit. I finally found a badge. Yep, there we go, there's a badge. I got to this one by virtue of coming from this wilderness first aid course. Each of the courses will have rosters of people who sign up for the course, and each activity that forms a part of the course will have its own rosters. Thus the website, and Salesforce too, is tracking what all the members of these courses and activities are doing, and the instructors verify that they've completed the things they need to complete for the badges. Let's see, what else?
Anything about the e-commerce side of things? Yeah, I've got two things in mind with e-commerce. One of the things we wanted was one shopping cart, so people can sign up for a course and put that in their shopping cart; if they're joining, they can have their membership in the cart; and if they've got books to go with the course, they can add those. Basically they can add everything to a cart and check out in one transaction, one credit card transaction, which makes it a lot easier; the old site didn't quite have all that together. They can even make a donation when they're in the cart, so that was another good feature of our website. Right, that was really important, to be able to combine and mix and match. We also implemented promo codes and discounts that people can apply. The one thing we didn't do is being able to sell event tickets, which we use Eventbrite for, which connects sort of with Salesforce. But with our CMC project, that's one of the things on the docket, to actually be able to sell tickets on both of our websites. That'll kind of complete the cycle: anybody can buy anything from the website and it goes in one cart. Right. The payment processor for The Mountaineers website is Stripe. We adopted Stripe pretty much in the early days of Stripe; I remember David Glick being very excited about discovering Stripe and what a nice developer experience it was to integrate. And it kind of inspired us later at Jazkarta to peel off some of the functionality that we had developed for The Mountaineers and put it into an add-on called jazkarta.shop, which implements that idea of a shopping cart with integration with Stripe or other payment processors. So anyway, you were very inspiring, and The Mountaineers has always been quite generous in allowing us to open source the appropriate things that we
create for them. So that's nice. We definitely like Stripe; it's been a good solid processor for us, no complaints there. All right, well, I think we can maybe wrap up. It was really great to get a little tour. Jeff, thank you so much for your insights, and Alec too. Everybody head on over to Mountaineers.org and check out all their amazing courses and content. If you're anywhere in the Pacific Northwest it's good, and if you're not, buy a book. We've got lots of good books for your area; we've got them for the whole world, a lot of good biographies and narratives, lots of good reads too. Excellent. All right, thanks everybody. Thank you. Bye bye.
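The one-cart checkout described in the talk (course signup, membership, books, and a donation paid in a single credit card transaction) maps naturally onto one Stripe charge for the cart total. The sketch below only builds the request parameters; the real `stripe.PaymentIntent.create(**params)` call is left commented out, and the cart item names and prices are invented for illustration, not taken from jazkarta.shop.

```python
# Hypothetical cart: one signup, one membership, a book, and a donation.
# Item names and prices are invented for illustration.
cart = [
    {"desc": "Wilderness First Aid course", "cents": 9500},
    {"desc": "Annual membership",           "cents": 8500},
    {"desc": "Guidebook",                   "cents": 2495},
    {"desc": "Donation",                    "cents": 2500},
]

def build_payment_intent_params(cart, currency="usd"):
    """Collapse the whole cart into one charge, so the member pays for
    everything in a single credit card transaction."""
    total = sum(item["cents"] for item in cart)
    return {
        "amount": total,  # Stripe amounts are in the smallest unit (cents)
        "currency": currency,
        "description": "; ".join(item["desc"] for item in cart),
    }

params = build_payment_intent_params(cart)
# In a real checkout this dict would be passed to Stripe:
#   import stripe
#   stripe.api_key = "sk_test_..."
#   intent = stripe.PaymentIntent.create(**params)
print(params["amount"])  # 22995, i.e. one $229.95 charge
```

Charging the total once, rather than per item, is what lets a donation or membership ride along with a course signup in the same transaction.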
Jeff Bowman, Alec Mitchell and Sally Kleinfeldt discuss The Mountaineers' feature-rich outdoor education Plone site
10.5446/55604 (DOI)
Hello and welcome, everyone, to our World Plone Day session. I'm Sally Kleinfeldt. I'm with Jazkarta, which is a US-based company specializing in open source web technologies, including Plone of course. And with me are Lauren Drummond, who is the digital content manager at the Washington Trails Association, and Dean Erickson, who is the information technology senior manager at the Washington Trails Association. As Jesse and I described at the last Plone conference, the WTA Plone site has been cruising along for about 13 years, which is not a record, but it's a good long time. We thought we'd share a bit of the history with you, with a particular focus on the role of volunteers on the site. So with that intro, could one of you start off with a brief history of WTA and the site? Sure, I would be happy to. WTA has been around for more than 50 years; we were established in 1966 as a way for people to share information with each other about hiking trails and public lands advocacy. We started as a little news press, a little hand-crank press in a barn, and have since become the largest state-based hiking and trail maintenance organization in the country. Our website was created by volunteers back in the 90s, partially because we're in Seattle and a really large volunteer and stewardship spirit is part of our organization. A couple of very savvy volunteers said, you know, the way we're doing some of this is not efficient; it could happen better online. And so they built our website. They built trip reports, which are a way for people to tell each other about trails and the conditions on trails, and I believe our first trip report was filed online in 1996. So pretty early days. Similarly, we had some volunteers build a custom volunteer management system way back in the day, to help get work done on trails.
Now we have, I think, over 100,000, really about 150,000, volunteer hours a year, on hundreds of trails across the state; it obviously started out much smaller. We have lots of different kinds of trips: day trips, single-party trips, backcountry trips in the summer (we're about to launch those tomorrow), lots of different kinds of trips for lots of different kinds of people all across the state. So we needed a really good way to connect people with trails and trail work and volunteer opportunities. A number of years ago we took this volunteer-built system, and Jazkarta helped us build a very, very custom system in Plone to meet all of our needs. One of the things about us is that we have a lot of custom work; we like to do things in really specific ways for really specific purposes, to meet the needs we have with all of our land manager partners and all the different kinds of people who are hiking. So that's the very brief history of WTA, the work that we do, and how that has rolled out onto our website. Yeah, did you want to add to that, Dean? I think it's kind of an amazing ecosystem, working with Jazkarta as WTA has over the years. It's an unusual and actually really great relationship: understanding what we're trying to accomplish as an organization and making the customizations that allow us to make progress. The website itself and the integrations we have with other tools have really liberated us to do more work and better work in the last years. So it's great. Awesome. Maybe we should turn our attention to some of the ways that the website, and Plone itself, has helped with volunteer engagement and contributions on the site.
I'm going to share a screen at this point; Lauren and Dean, tell me where I should go and you can talk about the things we'll show, starting off with the homepage. Yeah, so this is our homepage. If you Google "hiking Washington", that's the way a lot of people find us and connect with the work that we do. We do a lot of different kinds of work: advocacy work, helping bring funding to every level of public land we have, a lot of stewardship work on the trails themselves out in the field, and connecting youth and educators with the resources they need to get out. All of that engagement sort of starts with the website. People often find us through the hiking guide; maybe you could start on the hiking guide map, Sally. So this is one way that people find us, either here or through our app, which was developed by volunteers, which is pretty great. If you zoom in or out here, this represents a bunch of the trails, in cities and across all different kinds of lands, and it dives into our database of hikes. We have almost 4,000, I believe it's like 3,600, hikes across the state, with detailed information. Often this is how people find us. They might also run across us when they're out on trails: when you dive in, so this is Little Spokane River, you can see that little hard hat at the very top of the page that says we've worked on that trail pretty recently, so we've had trail crews and volunteers out on it. Some people run across us for the first time on the trail, but most people run across us through Google searches, wanting to go out for a hike and de-stress. Especially in the last year, trails have played a really big and important role for folks.
So if that's the case, you may also want to see how you can get involved, how you can volunteer. We have a whole other system that connects this monumental database of 3,600 trails with the volunteer opportunities we have. These are all of the upcoming work parties, and like I mentioned before, we have work parties across the state; there's a map view here, and we run work parties almost every week of the year. Some of them are single work parties: this upcoming Saturday at Coyote Wall, we're going to be out there working. We also have work parties that last a week, or that go into the backcountry and require certain skills. So this just helps really prepare folks for what a work party is like. One of the things we like to say is that no experience is needed, and we really like to prepare people to come out for a day; we try to create a really safe, welcoming environment. When we built our system here, we thought a lot about what you would want to know, what you need to know before you join us on a work party: obviously, how you get there, what kind of things you need to have. We've also created a system where all of the documentation you need, you can fill out ahead of time: create an account, sign a waiver. We built that several years ago, but in the last year it really came in handy for the requirements we had around six-foot safety protocols for COVID. All of this infrastructure baked into our volunteer system online really helped, and we were able to make a few adjustments (Dean was more involved in that than I was in the last year) to account for COVID.
And so we were actually able to get back to work on trails a lot faster than we thought we could. That's really awesome. Yeah, how about talking a little bit about the content that your volunteers create? I know that trip reports, which you haven't talked about yet, are an important part of the site. Yeah, like I said, trip reports have been a part of our work from the very beginning; that was sort of one of the original purposes. Land managers use them, and we look at the data around trip reports to anticipate use in different places, so that we can work with our land manager partners around that. Trip reports are user-created content: we have almost 150,000 trip reports online since 1996 (obviously there were more beforehand). They come from regular hikers, folks who are coming for the first time, and we have some trip reporters who have filed more than 1,000 trip reports; they've been on more than 1,000 hikes and then filed a trip report. They're incredible. That's amazing. It's really this incredible gift that Washington hikers give to each other, the take-a-penny-leave-a-penny: okay, here's what I saw, here's what you should go prepared for, here's what the water is like, there are some hazards here, or just, you know, I had a great time, I proposed to my girlfriend. We get all kinds of amazing stories, as well as really good data and shared information from hiker to hiker. I think a lot of trip reporters don't think of themselves as volunteers, but in the act of creating this piece of content, sharing their photos, sharing the data from their trip, they really are. So that's the other piece. There also is a photo contest, I think, so that's another thing volunteers donate. Yeah. And it's kind of a win-win.
There isn't a lot of motivation that we have to put out into the community to send in your terrific photographs from the trail, but a lot of people do, and it's kind of a virtuous cycle: people send these photos, we're able to use them in the context of our work, and people love it. Yeah, it's an amazing collection. I have to say that the huge number of photos on the WTA site is one of the technical challenges, because storing them and serving them took some special doing; there are so many of them. It's true. Between the trip reports and the photos submitted (and some of the form interfaces for the photo contest too), we have a tremendous amount of content for an organization of our size. It's sort of unusual: the visitorship we have to our site and the number of content pieces we have is much more outsized than a lot of other nonprofits, which have really just focused on the work and not necessarily on doing this content service, this information service. And of course the volunteers that submit trip reports have accounts and a certain place where they can submit them, so the content is managed by role, according to whether you're a site manager, like Lauren and Dean are, or a volunteer just submitting a trip report, et cetera; there are a number of different roles. I believe volunteers also can contribute to the hike descriptions themselves. Maybe you want to say a word or two about that, because the body of information about hikes, the official hike information, is incredible, and it's enhanced by volunteers as well. Yeah.
Way back when, the origin of our hiking guide started a little bit with the spirit of a wiki that people could contribute to. Since then we've evolved it, and we now have specific staff devoted to managing that content, and a cadre of volunteers whose official volunteer role is collecting really good data, mapping trails, and making sure the trail descriptions we have are right. But we still rely on folks to tell us: these directions are off, something has changed, the trailhead has actually moved. It doesn't happen a lot, but it does happen; it's really important, and we lean on our community to help keep us informed about that. So there's always been baked in a way to say "update this trail". And especially, we've really been building out our city trails and urban trails and parks in the last year. It's something we've been working on for the last couple of years through one of our campaigns, the Trails Next Door campaign, but the importance of it has really grown, so that's somewhere we've really been seeing people contribute their favorite neighborhood trail. Yeah. And I believe you created a fairly specific process for people to submit amendments to those hike descriptions and get them reviewed. That was a somewhat custom-developed feature for that use case, because the everyone-can-dive-in wiki model sort of broke down after a while. Yeah, it's a really structured review workflow in the back end. Well, so far we've been talking about content, the enormous amount of content that volunteers contribute to the site and all the value that provides.
But in addition to that, you mentioned being in a very techno-centric sort of part of the world in Seattle, with some amount of people who are able to contribute even very specific technical features to the website. And maybe you could talk about them a little bit. We saw one of them, I think, when we looked at the hiking guide map there. Yeah. So I think, you know, probably the best example for us is really the app. The mobile app, you mean. Yes, the mobile app that we have, which was developed in, I think, a hackathon years and years ago by some Microsoft volunteers. And then we had another really amazing volunteer, who still sits on our advisory board, say, hey, I really want to bring this to iOS and Android, and then did: put together a team, managed a bunch of other volunteers. And, you know, something that's been really cool and amazing is sort of this dual track: we had a volunteer designer, an interaction designer, and we sort of work with folks like Jazkarta as well as volunteers. And then we have a lot of volunteers who do things like user testing or serve in an advisory capacity for us. So we have the advantage of using people for their brains on things, you know, but we're actually implementing what we can with the budget that we have. So it's a really interesting process that's always evolving. And, you know, it's just very cool. And prior to joining WTA a couple of years ago, I didn't really understand that nerds like to hike. And, you know, it's interesting; I think there is a real Venn diagram of people that are deeply interested in our mission and also have some great technical skills. And even if they're not sitting down and writing custom integrations for Plone for us, they have a really good understanding of capabilities out in the real world.
And for a nonprofit to have that kind of, I don't know, partnership with the technical savants, it's terrific. I was just going to say, I mean, even in getting feedback from folks when we hear that something is broken, it's always with a really generous spirit and often very good documentation. And I know that as far as the mobile app was concerned, that's a great example of where it's fairly outside of the Plone world, but it needs the data that's in the Plone world. So WTA had a Plone developer develop the API. Plone comes with an API, but this was a somewhat specialized API to support all the features of the Hike Finder map and trip report creation. And of course it integrates; you know, the app can just access all the photos that you just took on your phone while you're writing the trip report in the field. And so WTA was able to have Plone provide an API that the app could talk to, and then the app developers were off on their own developing this fantastic feature. And somewhat the same, I think, for the Hike Finder map: there were developers who knew Mapbox, which is the particular technology that provides those maps, and it was a designer and the Mapbox people who were able to create just exactly how that map would look. And then it just got plugged into the Plone environment, and there it is. Yeah. And I think that WTA has benefited from some developers and vendors out there that like the mission. And so a lot of the tools that they bring to bear are ones that we may not have had as easy access to, Mapbox being one of them; our relationship with them is great. And I think we've learned some things about how to deliver that sort of visual mapping content in ways that work because they're kind of psyched on WTA.
And that's a good place to be in, especially when you don't have, you know, $50 million to throw around; we're a nonprofit organization, and so those relationships mean a lot. Yeah, fantastic. Do you want to say anything about working on a new feature that I haven't really seen at all myself, hike recommendations? Yeah, this is a project that we've wanted to do for a long time, and we finally have the budget to work on it this year. We're starting to put in place a recommender; again, it's built a little bit with volunteers working with our Plone developer. And we're working on taking that big database of hikes and making some recommendations to people, with the goal of really helping people find sort of the next hike. The first few trails that you hike in Washington often tend to be the same 10 trails, you know, the word-of-mouth trails that you hear about; they're great and they're popular for a reason, they're wonderful trails. And then you're like, okay, I love hiking, I love trails, I really want to get out, where do I go next? And when you're looking at a database of 3,000 hikes, even if it's a map and a really good searchable tool, it can be really overwhelming to know sort of what to do next. So we want to help people find the next step or the best trail for them through a recommender. And also thinking about the impact: there's been a lot of impact on public lands, and there's a lot of pressure on public lands. The population is growing here in population centers, which is great because it brings so many people to also care for the lands that we have. But we're thinking about helping folks find trails that maybe are a little bit less well known, or that can handle a little more foot traffic.
And so really taking into account that piece as well, and teaching folks about the bounty of the state. There are a lot more trails than just the few that people know right off the highways; you go another mile or two in any direction and there might be five more trails that have a lot to offer. So both location and also interests: if you're a fan of waterfalls, you may not know that there are all these other incredible waterfall trails. So really helping connect people in a different way. So we're excited about that. We're hoping to launch it this summer, sort of a beta version, and see how it does and see how our math and our algorithms stack up. Awesome. And I think that's another example where a very specialized, technically skilled volunteer, a data scientist specifically, is working with you on that recommendation engine for the hikes, which will then be integrated within Plone to do that work. Really cool. All right. Well, that was a wonderful overview of what you all are doing. Any final words before we close out here? I mean, I think the thing that's just been great about the site and Plone is really just helping us: the information that we're sharing is one thing, but it's really about connecting people to the purpose, and protecting lands and enjoying them and the benefits that come from that. And I think what's been wonderful is just seeing that through line, not getting too caught up, and really refining, like, what's the goal here in all of the tech projects. It's really about connecting people to places on every level, and that has come through in all of our projects with you as we work. You know, I'm always extremely inspired whenever I go to your website for whatever purpose; it's easy to just get sucked into the amazing photos and places that it showcases. All right.
Well, thank you so much for sharing this information for our World Plone Day celebration. Thank you. Happy Plone Day. Exactly. Bye. Bye. Bye.
Loren Drummond, Dean Ericksen and Sally Kleinfeldt discuss the important role volunteers have played on WTA's website.
10.5446/55557 (DOI)
Hello, I'm Sally Kleinfeldt from Jazkarta and I'd like to talk about collaborative data management. Scientists and researchers of all kinds produce data, data which needs to be managed. Project data often start out in a spreadsheet or Access database. If the research team is big enough, the next step is often a web database with a user interface for data entry and reporting. That solves the data management needs of many projects. However, when different team members need different kinds of data access, simple data management is not enough. For example, some team members might enter data while others review and publish it. On a botany or ecology project, some people may be responsible for entering field data and others for entering lab data about the same organism. Some data fields may be so sensitive that only a few people are allowed access. An example of this is the location of a rare or endangered plant. And these kinds of projects often need other collaboration features as well. The team may need to share things like PDF files, photographs, calendars, and general information. Such projects require more than data management. They need a system that also supports collaboration. I call this collaborative data management. There are no off-the-shelf collaborative data management systems, so the question is, how can we create one? As a general rule, data management systems can be pretty simple, but collaboration systems by their nature are more complex. They need to manage user groups and roles. They need to provide workflows so information can be private or published. And they need to provide fine-grained access control, which can be based on location, role, workflow state, data type, even individual data fields. Because of this, adding advanced collaboration and access control features to a data management system would be a lot of work. But adding data management to the right collaboration system can be straightforward. What is the right collaboration system?
Plone is the perfect collaboration system to add data management features to. Plone is rich in collaboration features. Out of the box, it provides member roles, workflows, fine-grained placeful access control, permission-sensitive search, and many other features. And it's very easy to define and add custom content types to Plone, which are perfect for capturing project data. In Plone, custom content types can have any number of data fields and extensive vocabularies. There can be many different types of fields: strings, integers, rich text, images, even geolocations. Custom types can even act like a folder and contain any number of files, images, and other content objects. Let's look at an example of a collaborative data management system built with Plone. The North American Orchid Conservation Center is a coalition of organizations dedicated to conserving North America's diverse orchid heritage. As part of their conservation mission, they're developing national collections of orchids and their seeds. The collections are managed at collaborating institutions where analysis, preservation, and propagation work takes place. Lots and lots of data is captured about each orchid in the field and in the lab, and different groups of people perform different tasks. Collaborating botanists enter data and submit it for review. Admins review and publish the data. Then other collaborators can see some of the published data, but not all of it, whereas anonymous users can't see any of the data at all. In addition to the data, everyone needs to share project documents, news, events, and other types of information. Let's look at the results. With minimal development effort, Jazkarta provided all the features NAOCC asked for, plus a whole lot more that just came out of the box, on a very simple Plone site. Here's what the site looks like to anonymous users. They can see the homepage and some public reports, news, and events. And here's what the site looks like to an admin user.
Notice that additional sections of the site are now available on the top navigation menu: the section containing the orchid data, a section where they can search for other users and see their shared information, and a section for collaborators to share documents, events, etc. You can also see a toolbar on the left, allowing them to add, edit, and publish content and perform other administrative functions. A big thank you to NAOCC. They funded this work as an open source project, thus making it available to other organizations that want a similar system. This kind of easy collaborative data management is just one of the ways Plone is a great tool for scientific research. If you browse the playlists on the Plone CMS YouTube channel for World Plone Day or the Plone Conferences, you'll find other examples of scientists and researchers taking advantage of Plone's rich set of features on their projects. And feel free to reach out to me if you'd like more information about NAOCC's collaborative data management system. Thanks.
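A custom content type like the ones described in this talk could be defined with a Dexterity-style schema. The following is a minimal sketch, assuming a Plone environment with plone.supermodel available; the interface and field names are invented for illustration and are not taken from the actual NAOCC system:

```python
# Sketch of a Dexterity content type schema for capturing project data.
# Assumes a Plone environment with plone.supermodel and zope.schema;
# the interface and field names are illustrative, not from the real site.
from plone.supermodel import model
from zope import schema


class IOrchidObservation(model.Schema):
    """Data captured about one orchid observation."""

    species = schema.TextLine(title=u"Species name")
    observed = schema.Date(title=u"Observation date")
    notes = schema.Text(title=u"Field notes", required=False)
    # A sensitive field like this one can be protected with
    # field-level permissions in Plone.
    location = schema.TextLine(title=u"Location", required=False)
```

Registered as a Dexterity type, a schema like this provides the data fields, while Plone's workflows and placeful security supply the collaboration features around them.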
Sally Kleinfeldt describes why some research projects need both data management and collaboration, and how Plone perfectly fills that need
10.5446/55558 (DOI)
So I'm going to talk about debugging your Plone site. This will be more than just Plone; I'm going to cover the basic debugging process in general. Disclaimer: I do not plan on eating bugs like the cardinal in this picture, so don't be expecting that. You can find me online at CDW9 most places. I am a Python developer at Six Feet Up and I'm also currently the Plone Foundation president. If you want to learn more about me, you can check out the Plone podcast that Six Feet Up puts out. I'm featured in the latest episode that was just released yesterday, on a streaming service near you. So, bugs happen. They can be discovered during development, through automated or manual testing, and by your users in production. So I'm going to cover basic debugging steps that can work on any website and then dive more specifically into debugging Plone. I've broken up the debugging process into three main steps: reproduce the bug, find the cause, and then fix the bug. Finding the cause can sometimes be the most time consuming part, especially if the issue is not immediately obvious. It sends you off trying to figure out what piece of code is actually causing the problem. The first step is to reproduce the bug. If the bug was found by someone else, the first step is to see it for yourself. This allows you to confirm the behavior and helps you make sure that the bug is actually fixed when you put in the code to fix it. So, reproduce the bug. First of all, make sure it is reproducible. It's possible it might only be a problem for the person that reported the bug, such as if it's a browser plugin. I've had it before where I was working on ads for a website, and then our QA engineer came back and said, well, the ads aren't showing, and it turned out it was a browser plugin that was blocking them. To go with that, it might depend on the browser, you know, if it's only happening in Safari.
So the next step: understand how it's reproducible. It might be something that is only happening in a specific browser. It might be something that is happening sporadically, like maybe it's due to slow page load. It could be the JavaScript ordering: if you have some JavaScript that is asynchronous, you need to make sure things are happening in the correct order. It might be a specific instance, so like if your site is load balanced across multiple instances, maybe it's only one of those instances that's having a problem, in which case you would see the problem sometimes on refresh and sometimes not. Also be careful, because users or other developers may make some assumptions when they report the problem; don't mistake those for truth. Correlation does not imply causation. They may come up with an idea of why something happened when that may not actually be the case. If you can run the website locally, fix the bug this way. So, let's say you see the problem on production: reproduce it on your local instance before fixing. If you can't reproduce it on your local instance, well then, check on production. There's a lot of going back and forth with this whole process sometimes. But it is very important to reproduce it locally, because if you don't do that first, you don't know if the code that you put in actually fixes it. You may start up your instance, write code to fix it, and go look and, hey, the bug isn't happening; well, it never actually was happening, so your code may not be correct. So make sure as you're testing locally to test against a copy of the data. If you are able to grab the data from production and use that locally, that's going to help you to debug the problem as well. You also don't want to go into bug fixing blind. Unfortunately, sometimes this is necessary, where you can't reproduce a problem locally.
And so you write some code that hopefully will fix it, and you don't find out until after the release. You can do that if you have to, but it's not always easy. So once you have reproduced the bug, the next step is to find the actual cause of the bug. Sometimes it's really easy to know what the problem is as soon as you see it, and you don't have to do this step at all. But sometimes you also need to go through a very thorough process of figuring out what is actually causing the bug. The first place to look is usually the logs, or, if your site is hooked up to Sentry, look there, or at the JavaScript console. Keep in mind that whatever problem you're seeing on the website may not be an actual error being thrown on the website itself, but the logs may offer some help. Not all issues throw an error in the logs; this is just the first place I usually go look to get some information. Next would be figuring out if it's an issue with the data, the content, you know, if you're using a content management system. Is it a problem with something the user entered, or is it actually a problem with the code? You can test the site with and without data and see if the bug still happens. Knowing which one it is, whether it's a data or a code issue, is going to help determine your path forward from this point. Next would be to use the Python debugger. Check out Philip Bauer's training from Monday, because he did a really in-depth training about using the Python debugger, and I learned some things from it. It was really awesome. I'll also note that if you're using Python 3.7 and up, you can put a breakpoint into your code by calling breakpoint(). Before Python 3.7, you have to do import pdb and pdb.set_trace(). And then you can use the web inspector tools. This goes back to checking the JavaScript console for errors.
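Going back to the Python debugger for a second, the two ways of setting a breakpoint just mentioned look like this. This is a minimal sketch; the average() function is a made-up example, and the debugger lines are commented out so the snippet runs straight through:

```python
def average(values):
    # On Python 3.7+, uncomment the next line to drop into pdb here:
    # breakpoint()
    # On older Pythons, use this instead:
    # import pdb; pdb.set_trace()
    return sum(values) / len(values)

# At the pdb prompt you could inspect `values`, step with `n`,
# or continue with `c`.
print(average([2, 4, 6]))
```

With either line uncommented, execution pauses just before the return statement, which is exactly where you would want to poke at the function's local variables.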
Even if you didn't do anything with the JavaScript, there might be something in the site that's causing problems and preventing other JavaScript from loading or events from firing. Next: was this introduced by a recent code change? If a release was done to the site in the last week, that code should be suspicious, so review the recent commits that were included in that release. Also clear the cache; maybe caching is actually causing the problem. If the release updated dependencies, check the issue trackers for those add-ons and also their change logs. Also don't rule out the possibility that the bug might be in the add-on itself, or even in core code; so like if you are using Plone or Django, it may not be the custom code that you built on top of it, it could just be in the core code. So it's very likely that you will get stuck at some point while trying to find a bug, and when you feel that all is lost, here are some places that you can turn to. The first option: rubber duck debugging. This is explaining the problem to a rubber duck. I know I've seen pictures of some of you already that have a rubber duck sitting on your desk, so that when you get stuck you can explain the problem to the rubber duck. If you don't have a rubber duck, you can talk to a dog, if you have a dog. I don't recommend a cat because they'll probably just walk away. Or some other random thing on your desk, if you have something. Next would be to ask colleagues to pair program. Don't spend hours banging your head against the wall; do what you can to find clues, and then turn to your peers for help. And don't be afraid to ask for help. I hope you're all working in a nice place where everyone wants to help and isn't going to look down on you for not knowing what the problem is. I mean, we all miss things; sometimes it's just a simple typo, and having a second set of eyes is super helpful.
And you know, if you're completely stuck, the other person you're pair programming with may have had similar issues in the past and can provide guidance. If it's something they've run into before, they may remember, oh yeah, all you have to do is this, or it might be looking up what they had done for it before; it's not always easy. And while you're pair programming on the problem, the other person may think of new things to try that you haven't tried yet while tracking down the bug. You can also send an email or post on a community forum. You don't get answers as quickly this way, but sometimes you'll find that as you start writing out the problem (it's kind of the same thing as explaining it to a rubber duck, but this time writing it instead of speaking it), you figure it out before you actually send the email or post on the forum. I have done this a few times myself. Another thing you can do is just walk away and come back to it later. Do not underestimate this, especially when you get to the point that your brain is fried and you're so tired of looking at this and trying to figure it out. Staring at the problem is not going to uncook your brain. So go take a walk, come back to it tomorrow, sleep on it. Who knows: if it's something you're thinking about enough, you may wake up in the middle of the night and think of something else to try. I have done that several times myself. So once you actually figure out what the cause of the bug is, the next step is to actually fix the bug, or catch the bug, as is the case with this flycatcher doing some fancy moves to catch a bug. Not sure if you can see it in his beak, but there's a bug there. So, have an idea of how you're going to fix the bug. But it's not always easy to do a quick code fix that you can release right away, so think about it.
Is there a workaround that you can give the users for now, something that you can provide as immediate help while you're working on the real fix? If the user did something wrong, as far as the way they entered data or what they did in the site, is there something you can include in your fix so that it will expect that behavior or prevent the actual error from happening? It's best to do any fixes that you can in the code instead of expecting the users to do things the right way. Make sure that whatever fix you put in place disrupts the users as little as possible. You don't want to expect the users to have to do something after the code fix is in place; do whatever you can in the code to help them. Also, when you fix the bug, make sure you update the tests. This is going to help make sure this bug doesn't come back as a regression in the future, and it's also going to help you keep complete code coverage. And after you fix the bug, there's some follow-up that should go along with that; do what you can to help your future self. If there is an issue or ticket attached to this particular bug, make sure it's updated, especially with information about how you fixed it. If you committed code, make sure there's a link to that commit in the issue. Also update the forum post or your collaborators with your answer. Especially on the forum post: if you asked a question, you may have gotten some responses that didn't help. Even if they did help, make sure you add a comment saying exactly what fixed it, because this is going to help lots of other people that run into the same problem. If they come across that forum post, they may find the answer. If you did any pair programming with someone else.
Let that person know what the fix was, because as developers we're very curious; we want to fix things, and we also want to know what the fix was if it was something we helped with. You can also submit a pull request. If you found that the problem was in some core code, maybe you put a workaround in your custom code, but make sure to submit a pull request to the core code as well. Now I'll talk more specifically about debugging in Plone. This goes along with the steps that I've already outlined, but these are some specific places you can look inside of a Plone site that will help you figure out what is causing your bugs. First would be to check the error log. You can see the errors that were logged in a site without logging into the server. This is in the ZMI, under error_log. And you can see there in the middle that certain exception types are ignored: Unauthorized, NotFound, Redirect. In this case you can see I commented out NotFound, so now if there are any 404s in the site, they get logged. You can see those along the bottom there. This can help you if you're trying to track something down; I have found that sometimes commenting out these ignored exception types can help, but not always. Keep in mind, though, that the error log is only going to show the errors on the current instance. So if you have a prod site with multiple instances running that are load balanced, you're only going to see the errors for the particular instance that you hit at that time, if you're just going through the production URL. And so as you hit refresh you may see different errors, or if you try to click on one, it's going to seem like it has disappeared once you click on it and get in there. So at that point you can log directly into the server and actually look through all the logs of all of the instances. You can check the undo log.
This works best when it's a change that someone made in the site recently. If it was something that happened a week or two ago, don't even go to the undo log, unless you know it's a site that no one touches very often. In older versions of Zope this is a tab at the top of the ZMI, and it changes with the context, so you can drill down further in the site, click Undo, and see the changes that happened at that level. But this is not the case in Zope 4. So, you can see the path at the top of my screenshot here: if you go to the Control Panel, there's a link to it in the upper right corner of the ZMI, then go to Database and then main. And the tab at the top will show you all the other things that have happened in the site. You can also hit a server-specific port directly, like a specific instance. With our buildouts we have an instance-debug instance available, so this isn't running with the regular instances; it's just an extra one that's available that can help us debug issues in production if needed. Like I said earlier, you want to debug locally as much as possible, but sometimes you do have to do it on production. So here's an example where I have an instance-debug script that is running on port 8089. I can SSH in and hit that port directly, and then run instance-debug in foreground mode, that is, with fg. That's going to show you everything output to the console, exactly what's going on. And you can even throw a pdb in some code, as long as you don't restart the other instances at this time, to then see what problems you're running into in the site. And you can view the site like I have at the link there; in this case it would be localhost:8089. But even though it says localhost, don't be fooled: remember that you are on production, and don't go make changes, at least not ones that you don't intend to clean up later.
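With a typical buildout, running that extra instance as bin/instance debug instead of fg gives you a Python prompt with app bound to the Zope application root, and from there you can also read the error log mentioned earlier programmatically. This is a sketch, assuming a Plone site with the id Plone and the standard Products.SiteErrorLog tool; adjust the id for your own site:

```python
# Sketch: listing recent entries from the site's error_log object.
# Assumes `app` (the Zope root, as bound at a debug prompt) and a
# Plone site with id "Plone".
site = app.Plone
for entry in site.error_log.getLogEntries():
    # Each entry is a dict with keys such as "type", "value", and "url"
    print(entry["type"], entry["value"], entry["url"])
```

This can be handy on a load-balanced production site, since you can run it against each instance in turn instead of clicking through each one's ZMI.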
So you can export a section of the data as a ZEXP. Especially if you're working with a production site that is huge, like it has 50 gigabytes of data, you don't want to pull all of that down to your local instance. But if you know that the bug is happening in one specific little part of Plone, you can export just the folder where you're having the problem and import it into your local instance, as long as the local instance has all the same add-ons installed as the production site. To do this from the ZMI, you select the checkbox for the folder that you want to export, and at the bottom there is an Import/Export button, which will take you to this screen. At the top it has the export section telling you this is the folder you're going to export, and you can say download to local machine. Last I checked, this only worked with one folder at a time, so don't try to select multiple folders and do an export; I think you have to do one at a time. To import it into your local instance, you'll go to the instance's import folder inside the buildout, throw the file in there, and then go back into the ZMI, on your local instance this time, click that Import/Export button again, and you'll have a dropdown that shows the ZEXP that you put into that folder, and you can import it. You don't have to import it at the same folder path as it was on production; you can throw it in anywhere. Next, some add-ons that can help you with debugging, and once again, these are only for your local instance; don't install them on production. There's plone.app.debugtoolbar, which offers some context information through the web. There's collective.recipe.omelette, which can go in your buildout and provides an omelette folder inside the parts folder that basically just points to the eggs, the specific versions that your site is using.
So you can even open those up, look at the code, throw a pdb in there, maybe even make some changes, but always remember to remove those changes when you're done so that they don't affect the site later, or other sites if you're using shared eggs. There's also plone.reload, which allows you to reload Python code through the web without having to restart the instance. And then Products.PDBDebugMode; I'll talk about this one more specifically. It offers a post-mortem debugger, so as you're running your Plone site locally, if there's an error with something, it's not just going to throw it into the log: if you're running the site in foreground mode, it's going to put you right at a pdb prompt where that error happened, which is super helpful for seeing everything in context and figuring out what the problem actually is. It also offers a @@pdb browser view. So if you're looking at a page, and maybe there's not an error on the page, or you just want to see information about that particular object you're looking at, append @@pdb to the URL and it's going to put you at a breakpoint in your console. At this point you can view that object, which will be self.context, and you can see everything that's attached to it. And it's a full debugger too, so if you want to import the Plone API and do lots of other things just to test around, you can do all that right there. In Plone 5 you can also add breakpoints to your templates; that's this code that I have here. If you're using Python 3.7 and up, you can change that to calling breakpoint() instead of importing pdb, but it works very similarly. Instead of self.context, there's going to be a context variable there; the context is a dictionary that's going to have all of your template variables.
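A template breakpoint like the one being described might look something like this in a Chameleon page template. This is a sketch, not the exact code from the slide, and assumes Plone 5's Chameleon-based template engine, which supports inline Python processing instructions:

```xml
<p tal:content="context/Title">Title</p>
<!-- Drop into pdb while the template renders; at the prompt you can
     inspect the local names (the template variables defined so far). -->
<?python import pdb; pdb.set_trace() ?>
```

When the page is requested with the instance running in the foreground, rendering pauses at the processing instruction and the console shows a pdb prompt.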
So, if you have a bunch of variables defined within a template, put this breakpoint after them and you'll be able to inspect what all of those variables are. If you are using Plone Classic, say with a Barceloneta-derived theme installed, I recommend disabling the theme; maybe you're not sure where the bug is coming from, and it could be something in your theme. So disable the theme: go to the Theming control panel found in Site Setup, switch the theme back to Barceloneta by clicking its Activate button, and see if you can still reproduce the problem. Also in Site Setup is the Resource Registries control panel, where you can turn on development mode. This turns off the minification and bundling of all the JavaScript files, which can be helpful when you know there's a JavaScript error but it's not clear which file it's in. If you can't figure out where a problem is coming from, start removing pieces, like removing the theme as I said on the last slide. Next things you can do: test the same site without data, so you're testing against the same code base but without the production data in place. You could try creating a new Plone site at the same Zope root, especially if you don't have all of the add-ons installed; install the add-ons one by one and see when the problem starts happening. You can also test on demo.plone.org; I go there a lot. It's kept up to date with the latest version of Plone; right now it's on 5.2, and soon there will be demos for Plone 6, both the React front end Volto and Plone Classic. The portal_catalog has helped me a lot too, not necessarily with finding bugs, but, say, I have a bug with a particular content type and I don't know where all of those objects live in the site.
I can go to portal_catalog, open the indexes, look at portal_type, and then click the Browse tab at the top, like it's showing here. That lists out where all of those objects are, split up by content type. This has also been helpful when I'm upgrading Plone sites to Python 3: if a site uses PloneFormGen forms, I want to know where all those forms live in the site and how many there are before I start a migration. And then, as I mentioned earlier, you can check online, posting in the Plone forum at community.plone.org, and I look in the issue trackers. If you know exactly which add-on is causing problems you can go look at that specific issue tracker; otherwise the general one for Plone is Products.CMFPlone. Now I want to give you some examples. These are actual issues that I have debugged recently, just to give you an idea of the process I followed to figure out the causes. Example number one; I don't have a whole lot of screenshots here, so hopefully you can picture my description. We have a client with an intranet that uses Products.WebServerAuth to automatically log users into Plone. After upgrading from Plone 5.1.4 to 5.1.7, that automatic login stopped working: I would go to the Plone site and not have the edit bar; I just was not logged in, whereas every other time I always was. One difficulty was that WebServerAuth only works on the server, so I couldn't debug this one locally, which made things a little more complicated. And the WebServerAuth repo wasn't reporting any issues, so I didn't know if the add-on was the actual cause of the problem or what was going on.
We spent some time figuring out which piece was no longer working. I didn't do all this by myself; my colleague Anthony had done some investigating as well, and he found that the REMOTE_USER header was missing. We thought maybe it was an issue with Apache not passing everything through, and since we don't have full control over the server we're working on, we thought okay, maybe someone else made changes that were affecting this. Eventually we were able to confirm that the issue was related to the upgrade; it wasn't a problem with the server. The next step was to determine whether it was custom code or the add-on. Like I said, we checked the add-on for reported issues and found nothing, but it's also not used a whole lot anymore. Still, I was convinced this was due to the upgrade; we didn't have any other custom changes as part of the release. So at that point the question was: which changed version pin is causing the problem? I looked through the release notes for all of the steps, Plone 5.1.5, 5.1.6, 5.1.7, and nothing in there really jumped out at me; maybe I missed it, I'd have to go back and look now. What it came down to was starting with the Plone 5.1.4 pins and gradually changing them to 5.1.7: I grabbed a chunk of pins at a time, moved them over, and checked whether the problem happened, until I finally narrowed it down to Zope. Once I knew it was Zope, I went to the repository and found this commit: "Prevent header spoofing via underscore/dash conflation". So there were security changes inside Zope, and those are what actually broke the add-on: since WebServerAuth depends on the REMOTE_USER header, which has an underscore in it, Zope was now ignoring the headers WebServerAuth was injecting in order to log people into the site. My fix was to pin the version before this change.
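Moving pins over a chunk at a time is really just a manual binary search. As a sketch (the pin names and the `breaks` callback here are hypothetical; in reality each probe means rerunning buildout and testing the login by hand):

```python
def find_culprit(pins, breaks):
    """Return the first pin whose upgrade triggers the bug.

    pins:   ordered list of pin names to upgrade, e.g. taken from versions.cfg
    breaks: callback; given the pins upgraded so far, "rebuilds" the site and
            returns True once the bug appears.
    Assumes breaks([]) is False and breaks(pins) is True.
    """
    lo, hi = 1, len(pins)
    while lo < hi:
        mid = (lo + hi) // 2
        if breaks(pins[:mid]):
            hi = mid            # culprit is within the first `mid` pins
        else:
            lo = mid + 1        # culprit is later in the list
    return pins[lo - 1]
```

With dozens of pins between two Plone releases, this needs only about log2(n) rebuild-and-test cycles instead of one per pin, which is why bisecting in halves beats moving one pin at a time.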
It's not always a good solution to use a version that lacks security fixes, but we determined that for this particular site, an internal intranet only accessible by VPN, it would be fine. Example number two: snippets were not loading in some cases. We have a client that uses the add-on uwosh.snippets. It allows admins to add a reusable "HTML snippet", which is actually a custom content type on the site that can hold script code. So admins can add JavaScript code there, and editors can then insert the snippet into a page; the editors aren't the ones putting in JavaScript, so they can't add anything malicious. Here's an example on the department's outreach "dual credit" page. You can see at the bottom there's some JavaScript code; normally the snippet machinery would render this into HTML and a form should display here, but in this folder it wasn't. What they reported to me was that the snippet was not loading on the default view of a folder: at the URL at the top, ending in dual-credit, it was not loading, but they did notice that if they went to the page directly, it did load. So the report was that snippets were not loading on default views. I checked and confirmed the behavior. I then tested locally: I set up a folder with a default view, and the snippets loaded fine for me. I investigated a ton and ruled out JavaScript issues; I was pretty sure this wasn't actually related to any JavaScript errors. I did happen to notice, though, that the snippets would load if I added a trailing slash to the end of the URL. With a couple of clues like that, I thought it might have something to do with the way the web server was set up, so I kicked the ticket over to one of our sysadmins to check things out.
He went in, hit the port directly, and saw the same behavior. Eventually, though, I realized I could reproduce the problem locally if I used the exact same ID for the folder, which I hadn't done the first time; because I was so focused on it being a problem with the default view of a folder, I didn't think it had anything to do with the ID. And that's where the trick was, because the default views were not the problem in this case. At that point I dug into the uwosh.snippets code to figure out how it does the transform, and I found this at the beginning of the transform function that renders the HTML: if the request's PATH_INFO ends with "edit", return the result untransformed. If you recall, the URL was dual-credit, which does end with "edit", so the condition was true and the snippet was returned without actually being rendered. The fix was to change the check to end with "/edit", because we don't want snippets to render on the edit view, but we do on the actual view. So that was the fix there. And example three: a form was not displaying in the navigation portlet. The client reported that they had a folder with a navigation portlet off to the right and a couple of forms inside; one of the forms was displaying in the navigation portlet but the other was not, for some reason. This one I ended up solving more easily just because of my experience and knowledge of Plone, but I found it was still very interesting to dig into and figure out. They had checked and saw that the "exclude from navigation" option was not selected, so the form should have been displaying. I checked and saw the same behavior, the form not showing up, and I double-checked the exclude-from-navigation checkbox there; it was not selected.
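The endswith bug from example two is easy to demonstrate in isolation. This is a toy reconstruction of the check described above, not the add-on's verbatim code:

```python
def skips_rendering(path_info, fixed=False):
    """Should the snippet transform skip rendering, i.e. are we on the edit view?"""
    suffix = "/edit" if fixed else "edit"
    return path_info.endswith(suffix)


# The buggy check also matches ordinary content whose ID happens to end
# in "edit", like "dual-credit" -- exactly the folder that failed.
```

Any ID ending in "edit" ("dual-credit", "audit", "bank-credit") would trip the original check; anchoring on the "/" path separator keeps the skip limited to the actual edit view.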
So at this point I exported that section of the site as a .zexp so I could work with it locally in my local instance. I thought maybe it was a problem with the navigation portlet itself, so I looked at the object directly and saw that the value was in fact set on the object, even though it wasn't selected in the UI. Understanding exactly what was happening took some historical knowledge of Plone. Looking at the source of that form, you can see the ID and name excludeFromNav, in camel case. At one point in Plone's history this property's ID was changed from camel case to snake case (exclude_from_nav), and this form was still using the camel case name. I should also mention that this was a PloneFormGen form, still using Archetypes, whereas everything else on the site was Dexterity. I eventually realized that if you check the exclude-from-navigation box on a PloneFormGen form, that is recognized and the form is hidden, but when you uncheck it, the form does not display again. I haven't actually dug in to figure out exactly where that part breaks, but I know that this is the problem, so I just manipulated the object from the command line to change that value. And that was my talk. Thank you all for joining; I hope you learned some good debugging tips.
Bugs happen. They can be discovered during development, through automated or manual testing, and by clients in production. I will cover basic debugging steps that can work for any type of website, then dive more specifically into debugging in Plone.
10.5446/55564 (DOI)
Hello everybody, my name is Paul Roeland. I have been active in the Plone community for a little while, mainly on documentation, on accessibility, and also on the board of the Plone Foundation. But that's not all I do. I have a day job, some of you may not believe it, but I do actually have a day job, and I also have a hobby. One of my biggest hobbies, and certainly the most time-consuming, is organizing a yearly film festival, which I have been part of ever since it began. That film festival is what I'm going to speak about today, and I'm going to tell you how Plone helps a film festival. The festival is called the Roze Filmdagen. It is Amsterdam's longest-running LGBTQ+ film festival; it started back in 1994. This year, under the current strange pandemic conditions, we had our first fully virtual edition, which was sort of nice, but also sort of strange. Still, we managed to stay alive, we managed to keep people interested, and we managed to show movies to people, which is what it's all about. This was already the 24th edition of the festival, and next year we're hoping to go big, and real life again, for our 25th anniversary. But first, let's see how we did things this year and how Plone helps a film festival. I warn you, this is not going to be extremely technical, because what I want to show is that basically everything we use to get a successful film festival website is mainly bog-standard Plone: very few add-ons, mostly things that work out of the box, plus some integrations with other software. Mainly I want to emphasize that Plone out of the box already comes with so many tools that really help even smaller sites. This site is small; it does, of course, have a lot of film screenings every year, but it's not huge. Plone actually works really well for these kinds of use cases as well. We run it on quite a small virtual server.
It has absolutely no budget and very little time spent on it, and it works. People appreciate the website: we do audience surveys every year, and our audience is really happy about it. They can find things. So what do we do? Here you see our website; this is the front page. This was, of course, the online edition. Unfortunately, the year before we actually had no edition, well, we had one later in the year, because we were supposed to open on the 13th of March 2020, and the Dutch government shut everything down on the 13th of March 2020. So literally on the day we were opening, we had bottles of champagne lined up, lots of finger food, guests coming over, and then we had to shut everything down, which was quite shocking, quite annoying. We had guests who couldn't go back to Puerto Rico and Bolivia for months because everything was closed down. We did manage to get rid of the champagne, obviously; we drank it all. But it was sort of a shocker. We were afraid that we would probably not survive, but thanks to some generous subsidies and generous policies from our sponsors, we did survive. We even managed to hold the festival in summer, when things were a bit more relaxed here. But this year it was quite obvious that we needed to go online. So this is a basic Plone site. The add-ons we have here are very simple. One is Mosaic; I love Mosaic. Mosaic, for those who don't know, basically gives you blocks, various blocks, so you can make slightly more complicated landing pages. And we use something called an isotope view, which makes nice, self-organizing lists of content items. So if we go to the program, you will see all our programming. It's a very long list. But let's say you are not interested in everything; you might just be interested in the shorts.
Then you click here, and you get all our shorts programs. Or you could say, no, I'm a big feature-film fan, give me all the features. And then maybe you think: still too much, I can't make up my mind, I want all the lesbian-themed features. Well, you click on it and there you have all the showings of all the lesbian-themed feature films. So it makes it very easy for people to slice and dice. Now let's go back to everything. One of the most useful features for a film festival, and for any organization that deals with timed events, is Plone's built-in support for the iCalendar format. There's a link, bog-standard in Plone; you click on it and it gives you a calendar feed of all the film showings, and people can import that into their Google Calendar or Outlook or whatever they're using. People really like that; it's being used quite a bit. But what many people don't know is that you can also import iCalendar files, and for a film festival that is extremely useful. Because unfortunately, for a film festival, and especially an online film festival, you cannot just put your screenings, which are arguably the most important part of our website, online in one place. There are several places where you need to replicate basically the same information, or at least the timings, dates, and titles. One of these is the ticketing system, which I'll come back to a bit later. Plone does not come with a built-in ticketing system, and you need one to sell tickets for a film festival; that also comes with various requirements, such as integration with a payment provider, and there's lots more there. But yes, it also needs those same dates and times. And then you also need a streaming provider.
We partnered with CineSend, one of the biggest names in the film industry; they're Canadian, they're proprietary, but they're really good. And you do need to go with a proprietary streaming provider, because you cannot just screen films everywhere. You have to do geo-blocking; that is something film distributors require of you. You cannot say that everybody in the whole world can now watch your films, because the film distributors will just say: nope, not happening. For each country they have different distribution deals, so you need to do things like geo-blocking. You also need limits: for reasons unknown, even though we were completely online, you do not get unlimited screenings. We actually had to set limits per film. A film distributor will say you can sell 500 tickets for this film but no more, or 300, or 200; that depends on what kind of deal you strike. So that has to be enforced, there needs to be DRM in place, there need to be lots of things in place. So you need to partner with a streaming provider that can guarantee all that and that keeps the film distributors happy. CineSend provided that, but CineSend doesn't provide a ticketing system. So we had to either hire one, which costs quite a lot of money, or rely on open source, which is what we did. We found a really nice one called pretix, which is based on Django and fully open source, so you can host it yourself and make of it whatever you want. And you can link it up with CineSend; we actually had that integration programmed by the pretix people and open-sourced. That's something we funded, so now other film festivals can use it as well. pretix is really nice; you can also sell tickets for conferences, theaters, everything. It's a really nice piece of software. But as I said, those three systems all need the same dates and times.
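Those dates and times travel between the three systems as iCalendar files, which are plain text: each screening is one VEVENT block. A minimal standard-library sketch of generating such an entry (the field values are illustrative; real exports also carry a VCALENDAR wrapper, time zones, and more):

```python
from datetime import datetime


def vevent(uid, summary, start, end):
    """Render one screening as a minimal iCalendar VEVENT block (UTC times)."""
    fmt = "%Y%m%dT%H%M%SZ"
    return "\r\n".join([
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"SUMMARY:{summary}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        "END:VEVENT",
    ])


print(vevent("screening-1@example.org", "Opening Night Feature",
             datetime(2022, 3, 10, 19, 0), datetime(2022, 3, 10, 21, 0)))
```

Because every consumer, whether Plone, the ticketing system, or the streaming platform, reads the same DTSTART/DTEND lines, getting the dates right once gets them right everywhere.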
And one of the most annoying things about typing in a hundred screenings is getting all the dates and times exactly right. So we decided to enter them in one of the systems, and we chose one that could export them. We originally put all the dates into CineSend, because that was the first one we had to set up. From there we could export iCal files and import them into pretix, so we then had the same events, with the same dates and times, in the ticketing system, and we could set prices and other details there. And in Plone you can import the same thing, which is a really good way to get the basis of all your screenings right. It's really simple. You go to any folder, and under Actions... no, it needs to be in a folder, so let's go to a real folder. Actions. Here it's still on; let me disable it for now, and then I can enable it again: "Enable iCalendar import". If I do that, I can choose which content type the events should be imported as; in our case that would be "screening", which I'll get to a little later. Then you either give a URL, or if you have it as a file, you can just browse and upload it here. That is how we imported all of our events, and it gives you a really good start, because the titles and the dates, the most annoying part, are all there. You still have to type in the descriptions and trailers and other things, of course, but it means you can be quite sure that all your events on your website, in your ticketing system, and at the streaming provider are actually happening at the same time and on the same date, which really helps. So what do we end up with? This is a screening. It has some metadata: it says when and where it will happen. Well, this particular one, our shorts, could stream on demand during the whole festival, so it doesn't have an exact date. Then the type, and the location, which this year, of course, was always online, but normally we use different cinemas.
So here it would say Cinema 1, Cinema 2, Cinema 3, plus what type of audience we expect, and an iCal download for that specific screening. How does this work? Well, it's basically a content type, and it's a direct copy of the Event content type. In the Dexterity content types control panel, you just copy Event and say: this is now a "screening". A screening is basically an event on which you switch on some extra behaviors. You say: I would really like a lead image, because lovely images are always handy, and you define your extra fields. You can do all this by clicking around, but I usually just edit the XML model file, because I'm that kind of person; I like to see what I'm doing. So here we added the day on which it happens, so you can sort on it, and a type field (I hope people can see this) with the types: feature, short, documentary, special. And that's basically it; it is really simple. What more does a film festival need? Well, a film festival, of course, also needs forms. We use easyform, mostly to collect entries from filmmakers. Every year we put up a form and say: dear directors, this is what you can do, send us your material, and we will look at all the films and select the ones that get in. That's handy because you can export the submissions; it's really practical. So, let me repeat again that the iCal import is super handy for a film festival. What is also handy is this isotope view, because you can use it not only for screenings; you can also make sure your listing of news items looks a little bit less boring. And as I said, I basically make these websites on my own in one weekend. With 140 films and all the news, it takes me about two admittedly quite long days.
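The XML model file mentioned above is a Dexterity supermodel schema. A rough sketch of what the screening type's extra fields might look like (the field names here are illustrative, not the festival's actual schema):

```xml
<model xmlns="http://namespaces.plone.org/supermodel/schema">
  <schema>
    <!-- day of the festival, useful for sorting listings -->
    <field name="screening_day" type="zope.schema.TextLine">
      <title>Day</title>
      <required>False</required>
    </field>
    <!-- fixed vocabulary so the isotope view can filter on it -->
    <field name="screening_type" type="zope.schema.Choice">
      <title>Type</title>
      <values>
        <element>feature</element>
        <element>short</element>
        <element>documentary</element>
        <element>special</element>
      </values>
    </field>
  </schema>
</model>
```

Editing this XML directly, rather than clicking through the fields UI, makes it easy to see the whole schema at once and to copy it between sites.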
But that's all it takes, which, if I listen to my fellow film festival organizers, is actually quite good going; they usually struggle quite a bit to get their websites up. So I would say that's a big win, because Plone makes a lot of things easy. It has the tags; it has basically everything you want right out of the box, and it comes with handy tools to keep things organized. If you want to move things around, it still finds them. All your URLs are quite logical: if you go to a certain day, the URL actually makes sense; it says rozefilmdagen, program, and then the date. So the URLs you communicate on Twitter or elsewhere make sense too; they're not something weird. What more can I say? You can add extras: here we built a little integration so that people go directly to the ticketing system and buy their tickets. Well, you can't buy them anymore, because obviously the festival has passed, but that was just an extra field on that screening content type that says: give me the direct link to the ticketing system, which we had already imported from the iCal feed, so I didn't need to type it all in again. It automates a lot of the annoying stuff and leaves you free to do the nice stuff. Now, shockingly, I have to admit that this was all built in Plone 5.2, because we needed some of the older stuff, and we didn't feel quite ready yet, also because the pandemic year has been quite busy for me, to start experimenting with the new front end, Volto, which will become the default in Plone 6. But I am actually now looking forward to that very much. Some things will change, so I'll have to learn, and I think everybody will have to learn. What will be needed for me to make the jump to Plone 6 fully is forms: something like easyform, some kind of form builder, so that directors can submit information to us.
We also need forms in a few other places, where people order t-shirts and other stuff. So forms are a big necessity for us, and I think for a lot of sites you need some kind of form; I don't care what it's called, but we're going to need that. Blocks, of course, are already there. We definitely need some kind of full embed block where I can embed raw HTML, because everything needs a trailer; we absolutely need to be able to embed raw HTML. Blocks are basically what Mosaic is doing now. I do still need to look into whether we can find some way to replicate this isotope behavior. I'm sure the EEA people will come up with something like faceted navigation, but isotope was a little bit simpler. I'm pretty sure there will be some React thingy that does that, that says: okay, give me only the films on this day, and then sorts those out. And of course, in Plone 6, even though the front end might be new and shiny, the back end is still going to be reliable, trusty old Plone, which means hopefully I can still import and export my iCalendar files. I hope I have conveyed by now that iCal is actually a super useful format if you do events. There's the export, which you do for your customers, for your audience, but the import is what keeps you sane while setting up all those events. You have one source of truth that has the dates, and you import it into all of the other software stacks that are necessary to keep the film festival rolling. Then you know the dates will actually be the same, which, funnily enough, is one of the most difficult things to get right. If you type 140 different events into three different systems, you're going to make mistakes. Even doing it once is hard enough to get right, and you need someone else to check it.
If you do it three times, you just multiply your chances of making mistakes. With Plone I don't have to, because it just imports the events for me and lays out the structure. So I'm looking forward to that. I'm also really looking forward to having a proper film festival again, because this year, as I said, we had many happy guests; over 12,000 people attended, seeing a movie from their homes. But it's not quite the same as actually seeing the films in the cinema and laughing and crying and joking and talking about the movies afterwards. So for our 25th anniversary next year, I sincerely hope that we will be able to have a real film festival, with real people, in real cinemas. We will probably keep some of the online component as well, because it did allow a lot of people outside Amsterdam and its vicinity to attend, though that's going to be complicated with securing the rights. Film festivals do rely on guests, on interaction, on directors being there, on the whole liveliness that comes with it. So hopefully next year, if you're in Amsterdam; let me get the dates right. Yes, March 10 to 20, 2022. I sincerely hope that I may welcome you to what has been my longest-running hobby, supported by Plone: organizing this Amsterdam LGBTQ+ film festival. So if you're around next year, please drop by, see some awesome movies, have some really nice discussions, and book them through a Plone site. Thank you.
Paul Roeland will describe how Plone supports the Roze Filmdagen - Amsterdam LGBTQ+ Film Festival.
10.5446/55568 (DOI)
Hello, welcome to World Plone Day for 2021; I believe it's 2021. Oh, yes. I am Kim Nguyen, a still-unreformed Plonista; I've been a Plonista for 18 years this year. I got involved with Plone when I was working at the University of Wisconsin Oshkosh and needed to build a website where people could log in and retrieve content that was otherwise not available to the public. This website needed to be maintained by teachers, so not programmers, and certainly not people expected to know, or be comfortable with, editing HTML or running servers or anything like that. Long story short, I got involved because I picked Plone out of a number of solutions that were available at the time, this was in 2003, and the project was so successful that I was able to convince the entire university to switch over to using Plone. Over the next few years, we built about 350 websites using Plone: a variety of public sites for colleges, the main campus, departments, and programs, plus intranets and workflow applications. It was really quite amazing what we were able to do with this one piece of software. Today I'd like to talk to you about some of the ways nonprofit organizations, which usually don't have a lot of money, can use Plone in a way that allows them not only to present a complimentary public face to the world, but also to put out news items, calendar events for things they might be organizing, so fundraisers or meetings, board minutes, things like that. As you can imagine, the Plone Foundation uses Plone, so plone.org has all of those elements. What else differentiates nonprofits from, say, corporations? In addition to just not usually having a whole bunch of money to throw at a problem, nonprofits are usually run by volunteers. And one of the things about volunteers is they can come and they can go.
One of the most important things, as you can imagine, in running any long-lived website is making it easy for new people joining the organization to understand how to use the software or system the organization wants to keep using institutionally. Ease of use is really critical for volunteers, and so is simplicity of managing the software. You really don't want to bring a new volunteer into a local city club or council or small charity and say: oh, and by the way, you need to log into a Linode, you have to use SSH, you have to generate your public and private key pair, SSH into the machine, and run this thing on the command line. You really don't want that. Plone really hits many of those points. Primarily, it's inexpensive, basically free as in beer, and of course it's really easy to use. Once somebody sets up your Plone site, all you have to do is log in: there's a login panel, you log in, and suddenly you can start editing the parts of the site you're supposed to edit, and you can publish content, and it's really as easy as clicking a button that says Publish. Plone is really a wonderful piece of software, especially for nonprofits. The example I'm going to talk to you about today in particular is a robotics team that I'm involved with. Our two sons are involved with this team, have been on this team, and I got sucked into doing this because I know some people in town who can get things done for you. Well, they actually knew that I could do websites, and I stupidly volunteered to do stuff like this. What you're seeing here today, I hope, is the website as it looked a few years ago. It doesn't look bad, actually. Wave Robotics is the team name. If I just scroll down a little bit, you can see it looks okay; it looks fine. There's a little bit of news, some images, some photos, and a little bit of news slapped on the front page.
So it looks fine. So what are some of the downsides of this particular website? So even though I say I kind of stupidly volunteered to do things for people, I try not to do something that doesn't really need my help. This particular website, as you can see, if I shrink the browser window, does not act the way you'd expect it to act. Certainly it does not look very good on a mobile device or phone or a tablet. So that was one of the challenges that they were aware of. But then probably more importantly, which you can't see behind the scenes here, is that if you look at, there's quite a bit of content on this site. It's not the university website, but there's a bit of history here. There's a lot of things under menu items. And there were definitely forms that the team wanted parents and others to be able to access and they wanted them to be behind a password. So again, the idea of somebody needs to log in and you need to be able to authorize different kinds of users to do different kinds of things. It seems like it's a perennial thing or it should be a perennial thing for a website, and though I believe competing content management, well, okay, I'll say loosely, content management systems, competing content management systems don't really do this very well, but Plone, oh my gosh, Plone does this without even thinking. So that whole role and authentication piece is such a nice part of Plone core and more on that in a second. So the biggest challenge that the team had is there are these adults who've been part of the team and who mentor the team, the students, and they wanted to get students to help them maintain the content on this website. As it turns out, when I happen to say to one of the lead mentors, hey, let me show you this Plone web content management system that I really like and you can tell me if it's at all useful to you. And so I went to demo.plone.org, which I hope you're familiar with. You can log in as a manager and then you can do a bunch of things. 
And the moment I went to the login page and I said — did you see that, by the way? Let me log out again. That was too quick. So when you log in to Plone, you can log in as a manager or editor in chief or an editor or any other kind of category of user or role that you'd like to create. And the moment I showed this person this login page and I explained the ability for us to manage who can do what on different parts of the site, she said, sold, I want this. Because the solution that they had, this thing, which is a proprietary content management system that comes with a calendar and file management and photo management system, just did not have that ability for them to say, hey, student, would you like to help us maintain the website? The only way they could do that was to give the student the same login power as the site owners, as the lead mentors, the adults who have been managing the team from year to year. So as you can imagine, the owners of the site, the lead mentors, decided, we really don't want students being able to go in and see everything and possibly damage or delete things. And so they just did not involve students at all in the maintenance of the website content. So when she saw the login for Plone and heard that you can create users and you can assign them to different roles and groups and then give different groups or roles the ability to edit different parts of the site, she said, this is really what we need. So I got roped in — well, I kind of did it to myself again. And so the end result was this Plone site, this Plone 5 site — it's actually Plone 5.2 running on Python 3. And it lets students log in. And I'll tell you a little bit more about that. So as part of the effort to help the marketing of this robotics team, we have what we call a subteam for different parts of what the team does.
And so there's a subteam that assembles the robot, there's a subteam that creates the CAD drawings, there's a subteam that manages designing and building of the electrical components, there's a driver, there's a pit crew. So anyway, there's a whole bunch of different subteams, but one of the subteams is all about marketing. And so I wanted to help them get going on what are some of the things that you might want to do if you're responsible for marketing in an organization, a nonprofit — or eventually, if some of these students get jobs in the business world, they might end up in charge of marketing for their company. So these marketing students were interested in learning what it is to create a news item and how to publish news items and what to put in a news item to make it appeal to someone who might not know enough about the team, but might be interested in learning, because they'd like to become volunteers, or, if they're a student, they'd like to join the team, or, hey, if they had money and they wanted to help the team by sponsoring something — like helping to buy some equipment that the students need, some tooling, or some of the materials used to build the robot — it'd be great to introduce people to what the team does. So some of the things that we ended up doing is allowing these students to log in to the Plone site and then create news items like this one and edit the news items. And then once we felt comfortable that they were careful and they could word things appropriately, it felt good enough that we also gave them the ability to publish the content without needing one of the mentors like me to review it. Although they still ask for us to review it, it's nice that we can allow students to log in to the Plone site, and then in the news item section or in the blog post section of the site, they have the ability to create content and review it and then publish it.
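To make the delegation idea concrete, here is a minimal conceptual sketch of per-folder ("local") role assignments in plain Python. To be clear, this is not Plone's actual API — all the group names, folder paths, and role names below are invented for illustration; it only shows the shape of what the Sharing tab does for you.

```python
# Minimal conceptual sketch of local (per-folder) role assignments.
# NOT Plone's real API; every name here is hypothetical.

# Which roles may perform which actions.
ROLE_PERMISSIONS = {
    "Reader":      {"view"},
    "Contributor": {"view", "add"},
    "Editor":      {"view", "add", "edit"},
    "Reviewer":    {"view", "review", "publish"},
}

# Local role assignments: folder -> group -> roles granted in that folder only.
LOCAL_ROLES = {
    "/news": {"marketing-students": {"Contributor", "Editor", "Reviewer"}},
    "/blog": {"marketing-students": {"Contributor", "Editor", "Reviewer"}},
    # No entry for "/": students get no extra rights elsewhere on the site.
}

def allowed(groups, folder, action):
    """True if any of the user's groups grants `action` in `folder`."""
    for group, roles in LOCAL_ROLES.get(folder, {}).items():
        if group in groups:
            if any(action in ROLE_PERMISSIONS[role] for role in roles):
                return True
    return False

student = {"marketing-students"}
print(allowed(student, "/news", "publish"))  # True: students can publish news
print(allowed(student, "/", "edit"))         # False: no rights on the rest of the site
```

In real Plone this is handled by the Sharing tab, groups, and workflow permissions in core; the point of the sketch is only that rights are scoped to a folder, not granted site-wide.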
So that level of authorization and group and role management is something that has been part of Plone since the beginning, and it's obviously one of the key differentiators between Plone and other content management systems out there. I believe, and I could be wrong, but I believe WordPress is still pretty much like that: once somebody can log into a WordPress site, they pretty much have the run of the place, and that may not be the thing that you want. It certainly makes it harder to feel comfortable with the delegation of content management and content editing, content publishing. So again, it's one of the big strengths of Plone that is in Plone core. It's in the box. So a little bit more about this website. As you can see, I'll just give you a little bit of a tour. You can see at the top here, in the portlets, we've got the Google Calendar, which is what the team uses. It's a shared calendar. We have news, a news portlet — pretty standard Plone; I'm reminded of something that Dr. Evil says, pretty standard really. My posts, and then a navigation portlet that shows you what else is in the site. So these are different folders that you can then navigate to. So you can look at the blogs. Pretty standard really. Okay. So other parts of the site: the information about the programs that the team is involved with. So there are different levels of the robotics team here. Anyway, my job here isn't really to promote our team, although if you have a lot of spare money and you'd like to sponsor something with the team, I'm sure we'd be very pleased to put your name up in lights here on this page. So what are the other aspects of this site that are interesting? It's pretty basic Plone. Plone 5.2, as I said, and running on Python 3. It's running on a Linode that costs, I think in this case, $5 a month. Plus we have the backups turned on. So that's another $2 a month.
So we're looking at $7 a month, really, to run this team's very important web presence. And it's the web address that we put on everything. You can see my cap. I mean, actually, I think — where is it? Oh, darn it. There's no web address on my cap. Okay. Well, it should be on my cap, but it's on other stuff. So the website is really important. And you can do things like play embedded videos. I don't think you can hear the sound because I didn't share it. But all of this stuff you just get out of the box with Plone. The only add-on that we have in here is, I think, Easy Form, so we can create online forms that people can fill out. They're easy to create. They're easy to edit. But this team has used a lot of Google Forms, so historically they haven't needed to use Easy Form. So pretty much this is your standard Plone out of the box. The theme is something that I got from a Google Summer of Code project from a few years ago, and I'll show you that in a moment. If you go to plone.org and then you go to the downloads, which you can get to from here, from the More menu, you'll see your standard Plone download buttons. And of course, you should be aware, I hope you're aware, that you can use Plone's Docker image to get Plone going really quickly on any laptop. If you have Docker, you can just launch a Plone Docker image very, very easily. But you can, of course, download Plone. And then if you keep looking a little bit lower, you can see some of the five themes created by one of our Google Summer of Code students. And the one that I selected is called Future Imperfect, which you can see a screenshot of here. And so this really was something that was easy to install and just select in the Plone site. And that was pretty much it. You can see the theme also comes with this — I'm not sure what to call it — it's an additional navigational tool that slides out from the side and shows you the structure of the site.
So now that I think about it, I probably didn't need the navigation portlet. But yeah, there's a couple of little quirks in the theme. But overall, hey, it's not bad for something that took just a few minutes to get going. And the portlets are pretty standard. It's easy to share. Google Calendar actually gives you a little embed code that you can insert into a static text portlet and then display wherever in the column you'd like. So what else can I tell you about this particular site? Once you log into a site — so I've already logged in in this particular window — you get your standard toolbar, and then you can see, say, some of the contents. So if I go to the blog folder — you know, it's not this one. There are actually two blog folders. Okay. So we need to do a little bit more work. You can see all the different blog posts that the students have put together. And then if we look in the Sharing tab, you can see that the marketing — there's a little glitch here; you can't see the checkboxes — but the marketing students are authorized to add, edit, and, well, and view, obviously, but also review. So there are four checkboxes that should be here that are enabled. So the marketing students essentially have free rein in this folder. In this folder alone — we don't actually let them edit anything else on the site. But in this particular folder and in the news folder, they can. So the students are part of this group. And so it's easy enough to manage group membership — standard Plone. It's very nice. Okay. So that's fine and dandy. I think I've shown you that somebody who's been doing Plone for 18 years can have an easy time of provisioning a Linode, installing — in this case, I usually use Ubuntu — then downloading the Plone installer and then running the installer, which is a command line tool, and then setting up Plone so it can run on the right ports and then setting up SSL certificates and setting up CloudFront in front, blah, blah, blah. Okay.
Maybe that's not something that your run-of-the-mill charitable organization volunteer — who's, like, a parent, or who's maybe a business person, not a programmer, not somebody who's comfortable using Linux, installing Linux, installing tools in Linux and so on — would be happy to do. So this problem is something that I feel is important for Plone, to make it approachable. So Plone itself, as a UI, is terribly approachable. You know what to do. You get the right buttons and you click on them and it does things. It's pretty easy to figure out what Plone does. The control panels are beautiful and easy to find and you can do tons of things like these content rules. Okay. But how do you set up your own Plone site? And back in December at the Plone Conference, the 2020 Plone Conference that was held online by volunteers — but also, I was going to say, okay, I'm not going to talk about that, but we did this great online conference — one of the talks I gave was about this concept I came up with for my talk, which I called Plone in a Box. Why do I call it Plone in a Box? Well, in the Plone community, we talk about how Plone has all of these features in the box. They just come with it. And it's one of the great selling points of Plone. It makes it so easy to sell Plone. You want something? Oh, it's probably in Plone core. Other competing content management systems — you've got to do everything through add-ons. You've got to sift through all the add-ons. Do I need this add-on? Is it that one? Or is it that one? And you know that those competing platforms have these terrible security problems, because even if their core piece of software is well designed and is secure, you've got to pollute that core product with so many add-ons and they're all vulnerable. Eventually, one of them has a vulnerability that allows somebody to come in and just take over the whole thing. And you really don't want that. Anyway, let's play nicely here. Plone doesn't have that problem.
And so Plone is great. It's approachable in that respect. You know what you get. It's going to be secure. You don't have to worry about figuring out all the different add-ons that you need to put in to make it do something basic. Obviously, there are lots of add-ons you can add to Plone. You can write your own. You can write your own integrations to other back end systems. But you don't need to. Your standard use case is like that of a nonprofit or a charity or a school or even a business or an intranet that you want to run. You can pretty much install Plone on a server and use it the way it is out of the box. But as I was telling you, the ability for somebody to create a server — well, nowadays we do everything in the cloud, right? Who's going to run a website off the computer on their desktop? Well, maybe some people might, but you don't have to. And usually, if you're serious about making your website visible or accessible to others and you want it pretty speedy, you probably don't want to do it from a home network. So you probably go to one of a number of cloud providers: that would be Amazon AWS, it would be Linode or Rackspace or DigitalOcean. Or if you're crazy, you might want to use Azure from Microsoft. Here it goes. Or you might be kind of weird and want to use Google Cloud Platform. Geez, I hope nobody's listening to this. So let's say you want to use AWS or Linode or DigitalOcean. Those are really nice tools. Well, you can use Plone in a Box. And so let me show you what Plone in a Box does. We have this GitHub repository, which is at github.com, slash collective, slash plone in a box. And this is a very easy way for you to provision a server and have Plone running on it in, seriously, about five minutes. It's actually less than five minutes if you use Amazon AWS. It's maybe 10 minutes if you use Linode. And once we get it working on DigitalOcean, I think it'll be on the order of five minutes.
Five minutes to create a cloud machine and to have Plone ready running on it with a few add-ons that I think are useful, particularly productive, and have it running for you so that you can start adding content to it within a few minutes. So the instructions I've got here, actually the markdown instructions here, I opened it up in this tab. And you can see we've got full instructions in here that talk about how you can set it up. And really, the short and long of it is you get your virtual machine, you get Plone, and you get these add-ons already installed, which are pretty easy to configure. And so there are some things that Plone in a Box will not do. It's, again, it's meant to get you up and running quickly. It's not meant to be a production-ready server, but you can make it production-ready if you can do the following things. They're not complex. Well, OK, I say that because I've done these before and I'm pretty comfortable with servers. But if you don't know how to do it yourself and you can't find help in the Plone forum, I'm sure that, well, actually, I'm sure that if you ask in the Plone forum, people will help you. But eventually, what I'd like to do, and I'm not the only one in the Plone community who would like this, we would like to make it really easy for you to go from having your Plone in a Box ready for you to play with and add content to and show your family and friends or your colleagues how they can do it, how they can join in and edit content easily with it, too. But to go from there to making a production server, we'd like to have a very easy path for you to follow. But more on that in a few minutes. So as I said, you could start off by, say, choosing Amazon AWS or Linode. And the first example I have described in here is if you were to use Amazon AWS. So I know you're going to ask me, how much does this cost? Well, for five easy payments of 49, no. 
It's going to be inexpensive if you get something like the smallest Amazon Elastic Compute Cloud instance, it comes to under $10 a month, maybe $7 a month, something like that. So it's a little bit more expensive than Linode's cheapest plan, which is about $5 a month. But if you're familiar with AWS, it's easier to use AWS. So essentially, you create an AWS account, and I've got a whole bunch of screenshots in here so you can follow along. You use the basic plan so you don't need to pay for it up front. And then, well, let's just keep scrolling. You log into your AWS console, and then you essentially search for an image, which is called Plone in a Box. And you say you want to install that image. So this is pretty much, well, this is a sort of screen by screen play of everything you'd have to do, but it's not complicated. And literally, you search for the thing called Plone in a Box, and you click Select. And once you go through a couple of next, next, next screens, which again, you can leave all the default values, you click Launch, Review and Launch, and then you wait about, well, three minutes, and you have a Plone site running at the end of it. So I'm going to keep scrolling here, and then you can view your Plone site. And there you go. Wow, it was even faster when I was talking to you now. So that's a Plone site running on Amazon AWS. That's a brand new Amazon EC2. EC2 is the Elastic Compute Cloud. That's the name. Amazon EC2 instance with Plone running. Plone 5.2 running on Python 3 with all of those add-ons that I showed you before. And you're off to the races. You can view your Plone site, you can log into it, you can start editing content on it, and it's running in the cloud. That's on Amazon AWS. So on Linode, it's maybe a little bit simpler. It's going to cost you a fixed dollar amount per month. Amazon AWS can charge you a little bit more depending on how much computing you use, how much CPU bandwidth and so on, disk space you use. 
Linode is a more predictable amount. It's pretty much what you pick. So the cheapest plan is $5 a month. They call it a Nanode, but it will run Plone just fine. So you go to Linode, you create an account, and then you search for what they call a StackScript. And you search for it by name. Again, it's going to be called Plone in a Box, and you can find it. Where is it? These StackScripts — and it's called Plone in a Box — and then you essentially say, create a Linode with this StackScript. So this process takes a little bit longer, as I said, probably less than 10 minutes, and it goes off and installs Plone 5.2 on Python 3, and then with all those add-ons that I showed you. Same idea. Again, what I wanted to show you today is that if you're a nonprofit, you're a charity, you're a club, or you're a business, or you're a tinkerer and you want to run your own little intranet, you can set up a Plone site on a server in just a few minutes using Plone in a Box. And that's really what I consider to be Plone at scale for nonprofits. Next steps for Plone in a Box really are to make it even easier for someone who doesn't necessarily have a Linode account or a DigitalOcean account or an Amazon AWS account. What I envision is for, say, Plone the community and Plone the Foundation to agree to set up Plone in a Box as a kind of software as a service. Let's talk about one of those hypothetically full-featured content management systems. Okay, I'm just being sarcastic about this. WordPress is great. It serves a very important use case. It's really popular. It's not so secure, but it's really popular. What can you do when you want to make a WordPress site? You go to WordPress — I don't know, is it WordPress.com? — and you say, I want to make a WordPress site. You do a few clicks, you pay your $5 a month, and you get a WordPress site. You didn't have to provision a server. You didn't have to create an AWS account.
You didn't have to create a Linode account or a DigitalOcean account. You just went to WordPress.com. You create an account, and then you click through a few options. You pick a theme, and then presto, you have a site at a domain name that is something.wordpress.com. If you want to pay a little bit more, then you can change what that domain name is so you can make it your own. So if you've got joeschmoe.com, you can say, joeschmoe.com actually points to this new WordPress site that I created with a few clicks. That really is what I see as the future vision for Plone in a Box. I'd love for it to be something that we can create and run smoothly. We've had in the past another service. Good Lord. Oh yes, Ploud — Plone in the cloud — that was created by Enfold Systems, and it seemed really promising. It seemed like a really good idea, but it hasn't lasted, unfortunately. I'm not sure exactly what the story was behind its shutdown, but that wasn't run by the Plone community; that was run by Enfold. I think the Plone community would probably be able to make something like this — Plone in a Box as a service — work. It might not make a lot of money, but it sure would be nice when people are thinking about using Plone. We already have many ways for somebody who's comfortable with Docker, or with Plone in a Box the way it stands, with creating an account in a cloud provider, to create their own Plone sites. I would like for us to be able to reach the kind of really non-technical audience that is going to need a user interface and a process that is much easier and does not require technical knowledge. I would like that to happen, and I hope we can make that happen. I hope you enjoyed my presentation today, and I hope you're going to enjoy all the rest of the World Plone Day content. Plone is a really wonderful community. I don't want to tear up talking about it.
I know it's maybe a little over-dramatic, but it's been a wonderful group of people over the years — such smart, welcoming, friendly, helpful people, very smart, technical people. It's a great family to join. It's changed my life. It's changed my career. It's allowed me to travel to places, see great places, and meet great people I would never have been able to meet without it. I hope you'll consider joining Plone and joining us, and then having a beer with us, or a wine, or an Aperol spritz, or, say, a Negroni someplace exotic. Thank you very much.
A case study of a student robotics team that needed a web site for which content creation, editing, and publishing could be delegated to students. A presentation of Plone in a Box™, an easy way to host a Plone site on a cloud server with just a few clicks and a few minutes for only a few dollars a month!
10.5446/54079 (DOI)
All right, thanks. So for the last several lectures, I talked about one algebro-geometric model of HOMFLY homology using braid varieties and related varieties, and some structures that we can understand on that side. So today I will give a completely different algebraic model of HOMFLY homology for some class of links, and this class of links is called algebraic links. So we start from a plane curve f(x, y) = 0 in C^2, and we assume that it has a singularity at the origin, and for now at least I will assume that the curve is reduced. So this equation is reduced but not necessarily irreducible. So you can have several components. And to this thing you can associate a link. So what happens? So you take a small sphere with the center at the origin, like this blue circle here. So C^2 is four-dimensional, the sphere will be three-dimensional, and you intersect it with the curve C. This will be a link in the three-dimensional sphere. And if the sphere is sufficiently small, it doesn't matter what the radius is, the topological type is always the same. And so for example, if you have just a line — maybe I should start with a line, actually, like this horizontal line here — this would give you a circle in the sphere. If you have a node, xy = 0, then I would have two circles in the three-dimensional sphere, and in fact these two circles will be linked. That's not completely obvious, but it can be proved. And so we get the Hopf link, which we see over and over, where we just have two crossings and two strands and take the closure of this braid. But one way to see why they are linked is that the linking number between these two components — so, how they're actually linked — is equal to the algebraic intersection number between these two curves at the origin. So these two lines intersect with multiplicity one, and so we expect linking number one, and that's what we see here. Another example is when we have the curve x squared equals y cubed. So that's a cusp.
And this corresponds to the trefoil T(2,3), so this is the closure of this braid. And in general, it's not so hard to see that if you have x to the m equals y to the n, this would be a torus link T(m,n). So this curve has as many components as gcd(m,n). But here you have two components, here you have one component, and in general you have gcd(m,n) components, and this corresponds to a link with gcd(m,n) components. And there are lots of useful facts about these links. So first of all, irreducible components of the curve: if you factor this polynomial f(x,y) locally in the neighborhood of the origin, the locally irreducible components of C correspond to connected components of the link L. So in particular, if C is reduced and irreducible, we get a knot, with one component. And in this case, it was actually classically known how to classify such knots. So not every knot appears as an algebraic knot. In fact, any algebraic knot is an iterated cable of a torus knot. So what happens is that you have this torus knot T(m,n), then you take a small neighborhood of it — it's a torus, knotted in a complicated way in S^3 — and then you draw another torus knot on the surface of that torus, and you keep repeating this procedure. And so there are some conditions on the parameters of these torus knots, which are written out in the literature, and there is actually a full classification. So if you know the Puiseux expansion of an irreducible plane curve singularity — the Puiseux exponents — you can just read off all these parameters of torus links and how many Puiseux pairs there are. So the number of Puiseux pairs tells us the number of cables that we need to do. And for general links, it's kind of more complicated. So each component of this link will be just an algebraic knot, which we just described. The way they are linked is kind of tricky, and it can be read off the resolution of the singularity of C; it can be read off some other things.
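For reference, the facts just stated can be written out in standard notation (nothing new here — just the statements from the examples above):

```latex
% Link of the singularity of {x^m = y^n} at the origin:
% the (m,n) torus link, with gcd(m,n) components.
\[
  L_{\{x^m = y^n\}} \;\simeq\; T(m,n),
  \qquad \#\{\text{components}\} = \gcd(m,n).
\]
% For two branches C_1, C_2, the linking number of the corresponding
% components equals the local intersection multiplicity at the origin:
\[
  \mathrm{lk}(L_1, L_2) \;=\; (C_1 \cdot C_2)_0 .
\]
```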
And there is a nice book of Eisenbud and Neumann on algebraic links and plumbed manifolds where they describe in detail how to describe such links, how to classify them, and so on. So if you're interested, algebraic links have been classified quite well for a while. In any case, we have an interesting class of links which is very closely tied with algebraic geometry by definition, because it comes from an algebraic curve. And so the natural question is then to ask: if I know the algebraic geometry of the curve, can I reconstruct some invariants of the link? And to various extents, it was answered by many people. And so what we focus on today is the following conjecture of Oblomkov, Rasmussen and Shende from around 10 years ago. And it says that you take this curve C, you take the Hilbert scheme of k points on this curve C — maybe I should write down: on C, supported at the origin. Then you take the disjoint union over all possible k, and you take the homology of this space. And then the claim is that this homology is actually the same as, again, the degree zero part of the triply graded homology of the link. And so in particular, if I know all these Hilbert schemes, I can recover this triply graded homology, at least its bottom a-degree piece. And they give a recipe how to deal with other pieces, but I won't talk about this. And of course, this has two gradings, because we have the number of points k and you have homological degree. And this gives two gradings on the space. It's always infinite dimensional, because there are infinitely many Hilbert schemes, but this is also infinite dimensional, up to some other things which we'll talk about. And then the claim is that these are isomorphic, at least as vector spaces, or bigraded vector spaces. And you can ask about more structures later, and that we'll talk about. And this conjecture is still wide open. So I'll review what is known and what is not known. But it's quite remarkable from an algebraic geometry point of view.
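Written out, the conjecture just described (in its bottom a-degree form, with the two gradings by the number of points k and by homological degree) reads:

```latex
% Oblomkov–Rasmussen–Shende conjecture, bottom a-degree part:
\[
  \bigoplus_{k \ge 0} H_{*}\!\bigl(\mathrm{Hilb}^{k}(C,0)\bigr)
  \;\cong\;
  \mathrm{HHH}^{\,a = \min}(L),
\]
% where Hilb^k(C,0) is the Hilbert scheme of k points on C supported
% at the origin, L is the link of the singularity of C, and HHH is the
% triply graded (HOMFLY) homology.
```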
So whether or not you care about HOMFLY homology, you can read it in two different ways. So originally, I think their idea was to say that this is a way to compute knot homology. So at the time, nobody knew how to compute this homology. And they say, well, here's an explicit algebro-geometric space, an algebraic variety. So just compute the homology of this variety and this gives you the answer. And in many cases, as we'll see today, we can actually effectively compute the homology of this space by using affine pavings and different things and other tools in algebraic geometry. And so we know the answer for HOMFLY homology. And in fact, the computations for torus knots were definitely inspired by this conjecture. On the other hand, given all the progress in our understanding of HOMFLY homology, you can reverse the logic and say, well, suppose that we know HOMFLY homology; then we actually know the homology of the Hilbert schemes, which is still very, very non-trivial. And I think one of the basic tests, which is still not proved in any reasonable form, is that the left hand side only depends on the link. So you can have different curves with different equations which give the same links. So you can slightly deform the curve within what is called an equisingularity class. The curve changes, and the ring of functions on the curve changes, and the Hilbert scheme of points on the curve changes. The link, of course, doesn't change. And so you have to explain why this homology of the Hilbert scheme doesn't change in these equisingular families. And as far as I know, this is a wide open question, except for torus links again. And there are other questions that you can kind of import from either side. So in particular, we talked about different structures on HOMFLY homology, like the actions of various operators and of tautological classes. And you can ask what they are here.
And they give a lot of insight on this side as well, which wasn't really expected from the beginning. So even though the conjecture is still open in most cases, there is a lot of interesting interaction: you can get some intuition for link homology from known computations on the Hilbert scheme side, or you can get some intuition and constructions, which I will review, from link homology, to get some structural results about the left hand side, or at least to conjecture them. So maybe let me pause here and ask for questions. Any questions about how we build the link from the curve, and what the conjecture says? [Question:] This Hilbert scheme is singular; is there anything about the intersection homology? [Answer:] Not that I know of. You can ask what the intersection homology is and how to think about it; I don't know. I'll give you some examples where it is singular, but I don't know what to say. Any other question? Okay. So anyway, here's one example which is very, very concrete. We look at the node example: the curve is xy = 0. And again, we're looking at the Hilbert scheme of points on the curve supported at the origin, because that's the easiest one. The Hilbert scheme of zero points on the curve is just one point: this is just the whole ring of functions, the ideal of colength zero. The Hilbert scheme of one point is again one point: the maximal ideal of the origin. For the Hilbert scheme of two points on this curve, you can prove it in different ways, but for example xy lies in the square of the maximal ideal, so every length-two subscheme of C² at the origin is contained in the curve. So the Hilbert scheme of two points on the curve is actually the Hilbert scheme of two points on the whole C² at the origin. And many of you know that this is just CP¹. So the Hilbert scheme of two points on the curve is P¹.
And then there is a computation, which I will skip in the interest of time, but there are some problems on the exercise sheet if you want to do it. The Hilbert scheme of three points on this curve is actually a pair of lines glued at a point: two CP¹s glued at one point. And this one point, the intersection point, is the ideal generated by x², xy and y². This obviously has colength three; it contains xy, so it is really an ideal on the curve. And this is the singular point in this case, and then you have two lines coming out of the singular point. The Hilbert scheme of four points is the union of three lines. So the dimension doesn't grow: instead of two lines, here we'll have three lines glued at two points, and these two points also correspond to monomial ideals. And in general, the Hilbert scheme of k points on (C,0) will be a chain of k−1 lines glued in this way. So you have just a chain. And so it's really easy to test the conjecture in this case. We can compute the homology of this Hilbert scheme explicitly. It is connected, so H₀ is C. And you have k−1 components, so you have H₂, which is C^{k−1}. And that's all in this case. And I guess you can also try to compute intersection homology. And so in this case, that's the answer: you just take the direct sum over k. If you want to write the Poincaré polynomial, which many people like: this is the sum over k and j of q^k t^j times the dimension of H_j of the Hilbert scheme of k points. The q-degree tracks the number of points; the t-degree tracks the homological degree. And so if you have zero points, that's 1. One point, that's 1. Two points, that's 1 + t², the Poincaré polynomial of P¹. Three points, that's 1 + 2t²: H₀ is one dimensional and H₂ is two dimensional. And so on.
So you add it up; it's a rational function, and you get 1/(1−q) — that's the 1 + q + q² + q³ and so on — plus q²t² divided by (1−q)². And this is the answer. And you can compare it with the answer for the Hopf link that we had before. If you remember anything from the first and second lectures, the homology was R ⊕ R/(x₁−x₂), where R was the polynomial ring in two variables. The polynomial ring in two variables corresponds to this q²t²/(1−q)² term, and the polynomial ring in one variable, which you get by quotienting by x₁−x₂, is this 1/(1−q). And these summands are separated in homological degree in link homology, and this shift is given by the q²t². So there is some change of variables, but up to this regrading the answers are indeed the same, at least on the level of graded vector spaces. And I think that's a very nice example, because you see more and more complicated spaces, but there is still a lot of structure here. So it's not random: the fact that this generating function is rational is not a coincidence by any means. Okay. All right. Any questions? Okay. So as I said, this conjecture is wide open in general; besides some cases, it's not known. It is known on the level of Euler characteristics. There is a monumental work of Davesh Maulik, who proved that if you take the Euler characteristics of the Hilbert schemes of k points, then you recover the HOMFLY polynomial. That was a separate conjecture of Oblomkov and Shende about the Euler characteristic. The HOMFLY polynomial depends on two parameters, q and a; you set a equal to zero, as you would do in the homology. And so Davesh used a lot of machinery to prove this result.
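For the record, the nodal computation just described adds up as follows. This is a sketch in my own notation; the regrading matching the Hopf link answer is as stated in the lecture:

```latex
% Generating function for the node xy = 0:
% each Hilb^k contributes 1 + max(k-1,0) t^2.
\sum_{k\ge 0} q^k \sum_j t^j \dim H_j\!\left(\mathrm{Hilb}^k(C,0)\right)
 \;=\; \sum_{k\ge 0} q^k\bigl(1 + \max(k-1,0)\,t^2\bigr)
 \;=\; \frac{1}{1-q} \;+\; \frac{q^2 t^2}{(1-q)^2}.

% Compare with the Hopf link: R \oplus R/(x_1 - x_2),  R = C[x_1,x_2],
% whose bigraded Hilbert series is  q^2 t^2/(1-q)^2 + 1/(1-q)
% after the shift by q^2 t^2 on the free summand.
```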
So he related this Euler characteristic to the topic of this conference — to counting curves and counting stable pairs supported on this curve — and to various relations between PT and DT invariants and a blow-up formula. So there was a lot of technology directly related to the topic of this conference, which unfortunately I will not review. But let me just say that it is related to stable pairs: basically you relate the ideals on the curve to stable pairs supported on the curve, and then you have PT and DT invariants and a blow-up formula for these. So there is a lot of technology related to counting ideals on the plane or on a three-fold, related to this, and to counting ideals supported on this single curve. And that's a very powerful proof, which is an indication that something is definitely going on. And then a much more low-tech thing is that we know the answer for torus knots. For torus knots — and I don't think we know the full answer for torus links yet, as far as I know — but for torus knots at least, with one component, you can compute both sides. So again, HHH was computed by the recursions from lecture one, which I sketched and didn't really explain. The work of Hogancamp and collaborators, in particular, gives very, very explicit recursions and very explicit combinatorial formulas for the dimensions of the different graded pieces. On the other hand, for the Hilbert scheme of points: the torus knot corresponds to the curve x^m = y^n. This is a curve with a torus action, and as such the Hilbert scheme has a paving by affine cells, which can be constructed very explicitly and combinatorially, and you can enumerate all the cells. And I will review this enumeration in a slightly different setup. And there is a combinatorial formula for the dimensions. I think in this setting this was explicitly written in the paper of Oblomkov, Rasmussen and Shende.
And so they say what combinatorial data enumerate the cells, and what the dimension of a given cell with given combinatorial data is, for every k. There are finitely many cells for every given k; you can enumerate them and compute the Poincaré polynomial from all of them. So this comparison between the two combinatorial answers solves the problem for torus knots, just because we can compute both sides. And again, the methods of computation are very different, and it would be very exciting to explicitly compare the recursions here and the recursions there. And that's not fully done: to find the geometric analogue of these recursions on the left, and what they actually mean — what does k mean, for example, which appeared in the recursion. In any case, maybe the main outcome of Hogancamp's computations is that we can now verify this connection. And then, besides actual computations and actual proofs, you can ask for structural results: if there are interesting structures on either side, do they match something on the other side? For example, recall that if you have an r-component link, then last time I explained that the triply graded homology has an action of the polynomial algebra C[x₁,...,x_r]. For each component of the link you can put a marked point, and that gives you a polynomial action. And that is a nice invariant of your link. And it is important because we're usually dealing with huge infinite dimensional spaces. So for example, here you had a two-component link and you have this explicit structure of a module over the two variables x₁ and x₂. And you can ask: well, does the left hand side have the structure of such a module? And the answer is yes. For one component there is an action of the polynomial algebra in one variable, and in fact of the Heisenberg algebra, constructed by Maulik–Yun and Migliorini–Shende; and Rennemo also explicitly constructed the Heisenberg, or Weyl, algebra on one generator.
So he constructed the action of x and ∂/∂x in some sense, and the commutator was one; he constructed the action of both x and ∂/∂x. And then for more components, this was done by Oscar Kivinen. So, roughly speaking, for each component of the curve you can add a point: if your link has r components, this means that your algebraic curve has r branches, and, very roughly, x_i adds a point on the i-th branch. But you need to be very, very careful if you want to say this properly. All these moduli spaces are singular, so you can't just naively use correspondences to define operators in homology. The way it was done, and I think in all these papers really, is that you have to use a versal deformation of the curve: look at the versal family and the family of Hilbert schemes associated to the versal family, define some correspondences there, and use the fact that the total space of this family of Hilbert schemes is actually smooth. So the key fact, observed by many people: if C̃ is the versal family, then the relative Hilbert scheme of C̃ is smooth. So you do it fiberwise for each curve in this family, and then you can add a point for each component, one component at a time. And you have some analogue of this ∂/∂x as well, which was also constructed by Oscar. Another thing which is useful, which I won't really talk about, is that you can also look at the global curve. Here I have the Hilbert scheme at the origin; for some of these actions it's much better to look at the whole curve C, or a compactification of C in CP². But I won't talk about this. And another thing is that you can get a lot more structure if you break the symmetry. So recall that our construction of the link homology module used a braid: it doesn't start from the link itself. And here, one reason why this conjecture is complicated is that you don't get the link as a diagram; you don't get the link as a product of crossings.
You get the link as the whole subset of S³ given by the intersection with this curve, and it's not clear where the crossings are, where the braid is, where anything resembling the things that we saw before is. And one step in that direction is to choose a projection of my curve to some line. This is related to what Richard mentioned yesterday: you can think of this C as a spectral curve. So you choose this line — again, everything is affine, so this is just a straight line with coordinate x — and we project our curve onto this line, and there is some degree of this projection, which is n. And so if I push forward the structure sheaf of C, I get a rank-n free module over the ring of functions on the line; again, I complete at the origin. And on this module I have an action of y, the other coordinate, the vertical coordinate, which acts by multiplication by y. So we'll see an example in a second. And so this is more structure, and as we will see, this extra structure gives us more control of the situation. And maybe — I think I didn't write it here, but I can say it now — what does the choice of projection mean in terms of the link? So note that the choice of projection for C corresponds to a presentation of my link as a braid closure. And this is the picture that I think many of you have seen: you look at a small circle on the base of this projection and you look at the preimage. The preimage of a point will be n points on the curve C, and as this point on the base goes around, you have some monodromy. And so this picture gives you what is called the braid monodromy: for every point on the base I have n points on the curve, and if I start moving the point on the base, these n points start twisting around and get linked to each other. And they behave like a braid, or a braid closure. And, more abstractly, you can think of the fiber as a one-dimensional complex line.
So this is a real two-dimensional plane, and you have n points on this real two-dimensional plane. And then as we go around, we get a loop in the configuration space of n points on the two-dimensional plane — that's a closed braid. And so naturally we can associate a braid, or a braid closure, or a conjugacy class of braids, to the choice of this projection. And again, for different choices we'll have slightly different braids on different numbers of strands, because the degree of the projection can be different. But this is additional structure which helps a lot to understand better what's going on. And so here is a concrete example. I start from the cusp x² = y³. We can write the ring of functions on the curve: polynomials, or power series, in x and y modulo this equation. But you can also choose a basis in the ring of functions: you have all possible functions in x, spanned by 1, y and y². And if I have y³, I can express it as x², so I don't really need y³; I can always write everything in this form. The ring of functions is actually a free module over C[[x]] of rank three, and this is a basis of this free module. And moreover, we can describe the action of multiplication by y in this basis: 1 goes to y, y goes to y², and y² goes to y³, which is x². So the matrix of multiplication by y is this thing on the right: 1 goes to y, y goes to y², and y² goes to x². So in this case the degree of the projection is three: here I have a free module of rank three and this three-by-three matrix. And on the topological side, this corresponds to presenting the link as the closure of a three-strand braid, which you can actually write down.
And if instead you project to the y coordinate, then you can also regard this as a free module over C[[y]], with basis given by 1 and x. And again, x² you can now eliminate and replace by y³. And then my capital X is the matrix of multiplication by x: 1 goes to x, but x goes to y³ under multiplication by x. And this corresponds to a two-strand braid, which looks like this. And there are, of course, different descriptions of the same curve. So topologically, you can get the same link as closures of different braids on different numbers of strands; and algebraically, you can just project to different lines. And as Richard explained yesterday, knowing this data is actually enough to recover the curve. So if I know this free module of rank three, for example, with this operator Y depending on x, I can recover the curve easily: namely, the equation of the curve is just the characteristic polynomial of this matrix. You have this matrix depending on x; if I take the characteristic polynomial, I get the equation y³ − x². And the locus of roots of this characteristic polynomial is precisely the spectral curve C. And I think in this case we assume that our curve is reduced; then the characteristic polynomial is the same as the minimal polynomial, so we don't have some of the issues that Richard mentioned last time. And so again, just to sum up: this is a completely formal procedure by which we replace a curve — just a curve on the plane — by the choice of this projection, by the following data: a free module over, let's say, power series in one variable, together with an operator which depends on this variable x. So any questions about this construction, about this relation to braids and so on? Okay. So if that is clear, then we can define what is called the affine Springer fiber of C.
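The two matrices just described, with their characteristic polynomials, presumably read as follows on the board (a reconstruction in my own notation):

```latex
% Projection to the x-line: rank-3 free module over C[[x]]
% with basis 1, y, y^2; multiplication by y sends
% 1 -> y, y -> y^2, y^2 -> y^3 = x^2.
Y = \begin{pmatrix} 0 & 0 & x^2 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix},
\qquad
\det(\lambda I - Y) = \lambda^3 - x^2 .

% Projection to the y-line: rank-2 free module over C[[y]]
% with basis 1, x; multiplication by x sends 1 -> x, x -> x^2 = y^3.
X = \begin{pmatrix} 0 & y^3 \\ 1 & 0 \end{pmatrix},
\qquad
\det(\lambda I - X) = \lambda^2 - y^3 .
```

In both cases, setting the eigenvalue variable equal to the other coordinate recovers the equation y³ = x² of the spectral curve.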
So this depends not only on C but also on the choice of projection. And so we look at all possible subspaces V in the Laurent series in x: here I had power series in x, now I want Laurent series in x. And I want V to be a lattice — a module of maximal rank, which is a technical condition — and, more importantly, I want this V to be invariant under multiplication by x, and I want it to be invariant under the action of this operator Y. And so this is a subset of the affine Grassmannian, which Joel already introduced last time: the affine Grassmannian is just the set of those V's in C((x))^n with xV ⊂ V, again with this lattice condition. So if you like, you have the space of all V's invariant under x, and then you put the additional constraint that V is invariant under this matrix — and that is why it's called an affine Springer fiber, in some sense. So this is this subset of the affine Grassmannian. And so what do we know about this subset? Well, if C is irreducible and reduced, then this affine Springer fiber is the same as what is called the compactified Jacobian of C — up to some shift on the lattices, which I ignore. This is very close, let's say, to the compactified Jacobian of C, which is defined as the moduli space of rank-one torsion-free sheaves on C, with some framing because we work locally. And a remarkable result, which was known for a while, is that this is actually the same as the Hilbert scheme of a large number of points on the curve. So if your curve is irreducible — and this is really, really important here — and reduced, then the Hilbert scheme of points actually stabilizes for sufficiently large n.
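Concretely, with the notation above, the affine Springer fiber can be written as follows (a sketch, suppressing the lattice normalization mentioned in the lecture):

```latex
% Affine Springer fiber inside the affine Grassmannian for GL_n:
\mathrm{Sp}_Y \;=\;
\left\{\, V \subset \mathbb{C}((x))^n \;\middle|\;
 V \text{ a lattice},\ xV \subset V,\ YV \subset V \,\right\}
\;\subset\;
\mathrm{Gr}^{\mathrm{aff}} \;=\;
\left\{\, V \subset \mathbb{C}((x))^n \;\middle|\;
 V \text{ a lattice},\ xV \subset V \,\right\}.
```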
And this affine Springer fiber is this stable Hilbert scheme, and it's the same as the compactified Jacobian. So you might have seen this in different settings; in particular, it looks different, but in fact it is the same thing as the Hilbert schemes we started with. And even more is true: Migliorini–Shende and Maulik–Yun proved that, in fact, the homology of the Hilbert schemes of points on the curve, which was the subject of the conjecture, is completely packed into the homology of the affine Springer fiber. You have your curve, you choose a projection in some way, and then you take the homology of the affine Springer fiber; that carries a so-called perverse filtration, which I will not define, but it's an interesting filtration on homology. And the claim is: if you take this homology of the affine Springer fiber equipped with the perverse filtration, and tensor with a polynomial ring in one variable, then you recover the left hand side, the homology of the Hilbert schemes of points on (C,0). And maybe I want to mention here that, for example, by this work of Rennemo, the left hand side has an action of the Heisenberg algebra in one variable, with x and ∂/∂x; and this is the same x, the same polynomial action, on the left. And because there is an action of ∂/∂x, it's actually a free module, so you just strip off this free action and what's left is the homology of the affine Springer fiber. So this is a very, very natural statement, and in fact, if you know the action of this polynomial ring on the left, you can use it to define the perverse filtration in an essentially unique way, using this formula and the work of Rennemo. But they did it slightly differently, using again the versal deformation and the decomposition theorem and other things, which I don't have time to talk about. And moreover, Maulik and Yun proved that there is an action of SL₂ on the right hand side, on this finite dimensional thing.
Yeah, so Oscar is commenting that if r is bigger than one, you have to replace Sp_Y by its quotient by some lattice, which I will describe in a second; and maybe we'll come back to this. But in the irreducible case, for one component, they proved that there is an action of SL₂ on this homology. And this is a less known part of the paper, but I think it's a beautiful result: you always have an action of SL₂ here, where the raising operator is just cup product with some class in H². And then they prove that this satisfies curious hard Lefschetz. So if you remember anything from the last lecture, there was an action of SL₂ on the link homology which satisfied a kind of curious hard Lefschetz property. And here again you have this, but this curious hard Lefschetz is with respect to the perverse filtration. So again, this is a singular space in general — and we'll see an example where it is really, really singular — and on a singular space you wouldn't expect ordinary hard Lefschetz and Poincaré duality, because it's just not true; you don't have it. But if you take the associated graded with respect to the perverse filtration and shift the degrees a little bit, then it becomes symmetric, and then it actually satisfies this hard Lefschetz and everything, if you do it properly with respect to the perverse filtration. So these are really, I think, remarkable results, proved in this paper of Maulik and Yun, and it is a very, very nice paper. And compare this with curious hard Lefschetz for the weight filtration on braid varieties from lecture three. Okay. So maybe a couple of examples. Let's start with the node again. I want to write the node as x² = y², because I want to choose a nice projection. And I can choose the matrix in different ways; let me choose the matrix like this.
You can also choose the matrix as x, 0, 0, −x — and that will be useful for us in a second. And then the corresponding affine Springer fiber is actually bad: none of these theorems apply, because x² = y² is not an irreducible curve. And the affine Springer fiber is actually an infinite chain of P¹s, so you need additional structure here to control its homology. But this is probably the most well known example. Another well known example is the cusp. The affine Springer fiber for the cusp is P¹, and this is the same as the compactified Jacobian of the cusp. I don't have time to explain this; in this case it's nice and smooth, and the perverse filtration is trivial, so there is nothing there. And what is known in general: if you have x^m = y^n — again this corresponds to a torus knot, with gcd(m,n) equal to one — then lots and lots of people have studied this space, many of them without even being given this curve, but again, essentially this is equivalent. And many, many people proved in different ways that it is paved by affine cells. And you can parameterize these affine cells by lots of different combinatorial data, and you can compute the dimensions of the cells. And this is beautifully related to q,t-Catalan numbers and combinatorics. For references: I think the first were Lusztig and Smelt, who studied this affine Springer fiber; then Piontkowski, who studied it as a compactified Jacobian; then Hikita, who studied this space and its cell decomposition and generalizations; and then our work with Mikhail Mazin and Monica Vazirani, where we studied it more combinatorially and related it to q,t-Catalan combinatorics. So there is a lot of work on this which I won't be able to cover in detail. But one great example: take the curve x³ = y⁴. The corresponding affine Springer fiber, or compactified Jacobian, is this thing.
So this is a cone over a Hirzebruch surface. And so this is singular: you have a singular point at the tip of the cone. And so how does this cell decomposition work? The Hirzebruch surface has one zero-cell, two complex one-cells and one complex two-cell. Then you take cones over these: you will have a 2-cell, two 4-cells and one 6-cell, and then you have the vertex, which gives you the zero-cell. So this is really a projective cone — well, an affine cone. And so the homology looks like this: H₀ is one dimensional, H₂ is one dimensional, H₄ is two dimensional, H₆ is also one dimensional. And you can compute the perverse filtration. It's not so easy, but you can compute it using the Hilbert schemes and the theorem of Migliorini–Shende and Maulik–Yun, or you can compute it by definition. And it turns out that H₄ has two pieces in different levels of the perverse filtration, so it looks like this. And again, the reason here is hard Lefschetz: you have a class α in H², you can keep cupping with it, so you have 1, then α, then α², and at the top α³; and there is another generator β over here, in H₄, of different perverse degree. And this should be compared with the E6 picture — the picture for E6 from last time, where I had exactly the same thing for the weight filtration on the homology of the braid variety. And so this example, and many other examples, and also the more general work of de Cataldo, Hausel and Migliorini, led to the conjecture, also made about 10 years ago, that this is actually always true: the homology of the braid variety — or some analogous braid variety — together with the weight filtration is isomorphic to the homology of this affine Springer fiber, or compactified Jacobian, together with the perverse filtration, at least in this irreducible case. In the non-irreducible case it's much worse.
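Summarizing the x³ = y⁴ example in a formula — my reconstruction, with α ∈ H² and β ∈ H⁴ denoting the generators mentioned above:

```latex
% Poincare polynomial of the cone over the Hirzebruch surface
% (affine Springer fiber for x^3 = y^4):
\sum_j t^j \dim H_j \;=\; 1 + t^2 + 2\,t^4 + t^6,

% with classes 1,\ \alpha,\ \{\alpha^2, \beta\},\ \alpha^3
% in degrees 0, 2, 4, 6; the two classes in H_4 sit in
% different levels of the perverse filtration.
```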
And as far as I know, this conjecture is really wide open, even in this case. I mean, whenever you can compute both sides, it's true, and I guess that's all we know. And there are some deep ideas why this might be true — the non-abelian Hodge correspondence and things like that — but maybe I don't have time to talk about this. But maybe what I want to say is that the left hand side is an open but smooth variety: it is non-compact but smooth. And this thing is compact but very, very singular. So there are just two different settings, and it turns out that they have the same homology. This one is non-compact and always smooth, at least in all examples we need, and this one is compact. And again, you can probably ask about intersection homology here, but I don't know what the right question would be. Okay. And another thing which I want to mention, since I mentioned the tautological classes last time: we constructed, or I indicated how to construct, the tautological classes on the left, and what the weights are and so on. And on the right hand side, at least for this particular singularity x^m = y^n, Oblomkov and Yun constructed an action of the tautological classes on this side: they really come from some vector bundle on the affine Springer fiber. And they proved — this is a big result of Oblomkov and Yun — that this homology is generated by the tautological classes, like it is here. So there are two tautological classes, α and β, and then there are all the relations between these classes, pretty explicitly. So again, if we believe this conjecture, you can ask: is it true on the left? We have the tautological classes now; can we write the relations between them, can we verify them on the left hand side, even though we don't know if the conjecture is true? For example, is it true that the homologies are isomorphic not only as graded vector spaces but as rings? And that's a very good question, which one should look at.
And so in a slightly different direction, I want to mention a result of Oscar Kivinen. What happens if you have a lot of components? In the extreme case, you can take x^{kn} = y^n. This corresponds to the (n, kn) torus link, so here we'll have n components, and all the pairwise linking numbers are k. And we saw this example last time, actually. So the corresponding n-by-n matrix has this form: a diagonal matrix with roots of unity — all the different n-th roots of unity — on the diagonal, times x^k. And for n equal to 2, this is the matrix with x^k and −x^k on the diagonal and zeros elsewhere; in general, we have all the roots of unity. And so you can ask: what is the affine Springer fiber for this particular matrix? And this was studied by many people, starting from Goresky, Kottwitz and MacPherson and others. And so in particular, you have an action of a lattice, which acts on this affine Springer fiber by translations: an (n−1)-dimensional lattice acts on it by translations. And you see it in the node example — sorry for scrolling up and down — where you have an infinite chain of P¹s; it's big and kind of complicated, but you have an action of Z by translations, say one step to the right or two steps to the right, depending on what you want to do. And you also have an action of the torus, because the diagonal torus commutes with this matrix Y, so it stabilizes the affine Springer fiber and acts on it. And so what Oscar actually proved in this theorem is that if you take the homology of Sp_Y, it matches the link homology with all the structures: the action of the lattice corresponds to the action of these polynomial variables, up to some subtleties.
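The matrix in question presumably looks like this (my notation for the root of unity):

```latex
% Spectral matrix for the curve x^{kn} = y^n, i.e. the (n, kn)
% torus link; here \zeta = e^{2\pi i/n}.
Y \;=\; x^k \,\mathrm{diag}\!\left(1, \zeta, \zeta^2, \dots, \zeta^{n-1}\right),
\qquad
\det(\lambda I - Y) \;=\; \prod_{i=0}^{n-1}\left(\lambda - \zeta^i x^k\right)
\;=\; \lambda^n - x^{kn}.

% For n = 2 this is diag(x^k, -x^k), matching the example
% mentioned in the lecture.
```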
And the equivariant homology of this affine Springer fiber matches the y-ified, or deformed, homology of the link, which depends on additional variables y₁ through y_n, and these are identified with the equivariant parameters. And so I don't want to write these answers again, but you have explicit formulas for this in lecture three. And I will return to this probably next time — I don't have time for it today — but there are some explicit ideals in the polynomial ring involved. And so basically he compared this homology, with all the structures, to these ideals in the polynomial ring. And this shows that not only can we identify this homology as a vector space with link homology, according to the Oblomkov–Rasmussen–Shende conjecture, but we also have this additional structure of the action of the x's and y's, and we can see it perfectly in this setting. And in this setting, it's actually easier to work with the affine Springer fiber, because you really see the lattice action and you really see the torus action. And for the Hilbert scheme on the curve, you unfortunately don't really see that. So you can ask what the meaning of that is, but that's a separate story. And so maybe the last thing which I want to mention today is a very recent work of Garner and Kivinen, which I think is very nice, and which I want to advertise as much as possible. So now you can take any curve C, and it could be irreducible, it could be reducible, it could even be non-reduced — for this, it doesn't matter. It is so general and so nice that you can have any equation of a curve, not necessarily reduced. And then what they proved is that the Hilbert scheme of points on the curve is not an affine Springer fiber for GL_n, but it is actually a generalized affine Springer fiber: there is a notion of a generalized affine Springer fiber depending on the group GL_n and a representation N, which is in this case the adjoint representation gl_n together with the vector representation C^n.
And so there is the generalized affine Springer fiber. It depends on a vector in the representation: so we have the n-by-n matrix and a vector in C^n, and we just choose this matrix γ together with a vector, let's say (1, 0, …, 0) in an appropriate basis. And one can say properly what it is. And one consequence of this is that there is an action of a very interesting algebra, with interesting structures, on its homology. So Braverman, Finkelberg and Nakajima, for any G and any N, defined a certain algebra which is called the BFN Coulomb branch algebra — which Joel started defining yesterday and, I guess, defined fully today. So this is really from Joel's lectures; it is the same algebra. And one result is that, under some really mild assumptions, this algebra for G and N acts on the homology of the generalized affine Springer fiber for G and N, for any choice of vector. And so in particular there is an action of this big interesting algebra on the homology of the Hilbert scheme of k points on the curve, summed over all k. And this unifies all the constructions that I mentioned before. Furthermore, if the curve is quasi-homogeneous — meaning there is a C*-action, like our friend x^{kn} = y^n from today — then there is a C*-action on the curve, hence a C*-action on the Hilbert scheme, and you can look at the equivariant cohomology of the Hilbert scheme. And there is an action of another algebra on it, which is known as the quantized BFN algebra, which I'm sure Joel will define today. This is a non-commutative algebra which acts on this equivariant cohomology. And it helps a lot in computing this cohomology and gives it a very rigid structure, because it is a module over a very explicit non-commutative algebra. And in fact Kodera and Nakajima described this algebra for us.
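A rough sketch of the definition being alluded to, following Garner–Kivinen (here O = C[[x]] and K = C((x)); the exact conventions may differ slightly from theirs):

```latex
\[
\mathrm{Sp}_{(\gamma,\,v)} \;=\;
\Bigl\{\, g \in GL_n(K)/GL_n(\mathcal{O}) \;:\;
g^{-1}\gamma g \in \mathfrak{gl}_n(\mathcal{O}),\;\;
g^{-1}v \in \mathcal{O}^{n} \,\Bigr\},
\qquad \gamma \in \mathfrak{gl}_n(K),\; v \in K^{n},
\]
```

with v = (1, 0, …, 0) in the case at hand; dropping the condition on v recovers the usual affine Springer fiber for gl_n.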
So they said that, for this particular choice of G and N — where G is the group GL_n and N is the adjoint representation gl_n plus C^n — this quantized BFN algebra is nothing but what is called the rational Cherednik algebra, or the spherical rational Cherednik algebra if you want to be pedantic. And this is an algebra which people have studied, and you can describe it very, very explicitly by generators and relations and so on. And so this result says that there is an action of an algebra which people understand on the homology that we want to study, which is supposed to be the link homology, or an equivariant version of it. And so this raises a lot of questions: is there an action of this rational Cherednik algebra on link homology, or of a deformation of it corresponding to these equivariant parameters? Can we identify this representation? And in many cases the answer is yes, and they identify this representation. And what happens for non-reduced curves? So here you can start doing non-reduced curves: they still have this C*-action, and you still have an action of some algebra, and there are lots of interesting questions here. And maybe one specific example of a non-reduced curve is x^n = 0 — this also works. So I think this is the first example where you can properly understand the homology of the Hilbert scheme on a non-reduced curve. So you have ideals in the ring of polynomials in x and y mod x^n, and you have some interesting structure there; you can compute the homology, and the same technique shows that you have an action of this quantized BFN algebra on this equivariant homology. You can identify this representation very explicitly. And I think this is very nice and kind of brings all the structure together. And I would expect that maybe other, especially non-reduced, curves can be studied by this machinery.
Even though you don't have a cell decomposition, geometric representation theory gives you another way to study this space. And I think that's all for today. Thank you very much. And again, Oscar is there in the audience; you can ask him after the talk for more clarification. Any questions for Eugene, for Oscar? — Is there a K-theoretic version of this last thing? — Yes. So you would have some version of DAHA, and I'm not sure if it's worked out in full detail. If you had GL_n without the C^n, I think this is really just the DAHA defined by Cherednik, and this was worked out by Braverman, Etingof and Finkelberg, I guess. And if you add the C^n, I'm not sure — so there is certainly a notion of K-theoretic BFN algebra, I'm just not sure if that was computed explicitly. — Yeah, it's trigonometric. — It's trigonometric, aha, okay, so then you have the trigonometric DAHA. But I think all of this would work in K-theory, because the construction is very general, by correspondences, so that should work, I think. — And would this be related to any knot invariants, like, on the other side? — I mean, this space, the homology of the Hilbert scheme, is supposed to be the same as the link homology, right? So you would have an action of this algebra on link homology. Here you would have some kind of deformation of link homology with one extra parameter, which might be related to this y-ified homology or not — I'm not 100% sure here. But you would have an interesting action of this algebra there. And you can ask: well, you have all these interesting operators acting on link homology — are they related to the topological classes that we constructed last time? That's an awesome question. Do they give you extra structure on link homology? I don't know. But these are all excellent questions. — So, yeah, I wanted to ask: yesterday, when you discussed this y-deformed homology of links —
You had this operator ψ. — Yes. — And some DG structures. So do you see them on the affine Springer fiber side? — Yeah. So these ψ's are Koszul dual to the y's, and the y's are the equivariant parameters here. And you can also find them another way: because you have the C*-action, instead of looking at the equivariant homology you can look at the non-equivariant homology with the action of the homology of C*, and then the ψ's are generators in the homology of C*. But they're really Koszul dual to the y's. So maybe I want to mention this: it means that you have interesting degree-one operators here, which you can study — they're not closed, but you can control them, and you can construct them explicitly as well. And maybe what's more relevant is that, I guess, you can take the quotient by the lattice action — which you like, and many people in the audience like — and there still is an action of the torus, and I think there you can see the ψ's more clearly. But I don't know if that is explicitly worked out. — Okay, thank you. — We also have a question in the Q&A: is the affine Springer fiber related to the Sato Grassmannian and Kac–Schwarz operators? — I don't know. Maybe I want to say — since Oscar mentioned this and I didn't — I think it's related to Peng's question. So you have this example where x² = y². So we have an infinite chain of P^1's, and you have an action of Z by translations, and you have an action of C*, which scales all these things, and you can take the quotient. So this is my Sp_γ, and if I take the quotient by Z, I will get just one line, which is glued to itself at a point. So this is a line glued to itself. And here you can see that you have H⁰, H¹ — oops, one second, sorry. Can you see it? You can see my screen here? Okay, great. So this is the line glued to itself, and here I'm interested in H⁰, H¹ and H². And so there are several comments here.
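For the record, the computation being pointed at can be written out — a sketch, assuming the quotient is P^1 with the points 0 and ∞ identified, which has one cell in each of degrees 0, 1 and 2:

```latex
\[
X \;=\; \mathbb{P}^1 / (0 \sim \infty):\qquad
H^0(X)\cong\mathbb{C},\qquad
H^1(X)\cong\mathbb{C},\qquad
H^2(X)\cong\mathbb{C},
\]
```

so the total homology is three-dimensional, matching the three copies of C that come up in the comparison below.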
So one comment is that this matches the computation for 1 + xy ≠ 0, which we saw last time. And again, this is a very, very different space: here I have a line glued to itself, there I had kind of an open subset of C², but the homology is the same. So this is another argument towards the conjecture. And another thing is: you still have an action of C* on this. So you can either look at the equivariant homology, which is fine, or you can compare it to the deformed homology, where we had R, R, R with zero differential, R being the polynomial ring in x1, y1, x2 and so on. And then if we forget the C*-action, we kill the x_i, and we get just three copies of C with zero differential, right? So if I kill the x_i in R, I get just C from each of them — I take the tensor product with C over R. And then my odd variables, the ξ's that Peng asked about, would act here. And so you would have an action here: I think you would have an action of the odd variables on this homology, which you see very clearly — you should have an action of, basically, the homology of C* here, which would go, I think, from H⁰ to H¹, but I might be wrong; I can't think it through on the spot. But this is certainly the same vector space — this three-dimensional homology of the quotient. And there should be an interesting operator corresponding to this ξ, and there should be another interesting operator corresponding to what I called u2 yesterday; this u2 gives you a class in the top-dimensional homology. So yeah, homological operations here would give you interesting operations on this homology over here. And again, essentially by Koszul duality, if you know the deformed homology as a module over the x's and y's, you know the undeformed homology as a module over the ξ's and x's. And you have all the structures.
And again, the lattice action and the torus action play the role of the different variables in the homology that we saw. And there is a Koszul duality between homology and Borel–Moore homology, which you can ask Oscar about. — Yes. Okay. Do you have any other questions? Great — Eugene, thank you so much. — All right. Thank you so much. Thank you so much.
Khovanov and Rozansky defined a link homology theory which categorifies the HOMFLY-PT polynomial. This homology is relatively easy to define, but notoriously hard to compute. I will discuss recent breakthroughs in understanding and computing Khovanov-Rozansky homology, focusing on connections to the algebraic geometry of Hilbert schemes of points, affine Springer fibers and braid varieties.
10.5446/54862 (DOI)
So it's an extremely important need. And if we look at how much time we spend working — like a third of our day, usually, and if you leave sleeping out as a pretty safe spot, hopefully — then it's half of our waking time that we spend at work. And then the question is: how safe are we at work, actually? I have spent the past ten years working on safety solutions, on software to support safety at work, and I will use this today to make a case why Plone is an excellent solution for large and long-running projects. First some facts. In 2008 we lost 43 billion euros in productivity to work incapacitation. That means people could not work because something happened at work: they were injured, they were in recovery. In relation, that is 49 billion dollars, or 5.5 trillion yen. Another statistic is that every 15 seconds some employee worldwide dies due to a work-related accident. This is a number from last year. In Germany we only have 638 deaths per year — if you compare that with traffic accident statistics, that's almost nothing. And I say "only" 638 because we already have 200 years of work on traditional workplace safety, which is not a common thing. So the core message is: safety at work matters. It's a big thing, and if you can optimize something here, you are actually saving lives, you are saving health. You can do a lot. So the question is: where does it come from? Already in 1839 a guy called Friedrich Wilhelm III von Preußen started it — and the pictures really show him in his uniform, being a soldier, actually. So why should a soldier be interested in workplace health? Any ideas? Because where do you get your soldiers from? That's the young people, right? And if you have child labor, there are no good soldiers anymore, because they are already used up. So that was a very selfish thing: he wanted an army, so he needed to make sure that the kids who would make his recruits were properly functioning.
So he made the first law against child labor. In 1884 another nice soldier type we have in Germany, Bismarck, whom you also would not associate with workers' concerns, did a similar thing for similar reasons, and founded these workers' compensation boards, which are very strong, in Germany at least. And in 1974 Germany got the law on workplace safety that we currently have, which is quite strict and has in many ways inspired the European workplace safety rules. Why is that a thing? Why do people do stuff like that? Actually, it's cheaper for the employer, because the cost you suffer when your people are not working is much higher than the money you actually have to invest in creating workplace safety. Then, of course, there's the reason that we have to: most of the companies do it because it's legislation. But studies have also shown that the whole company benefits, because people participate and think about their workplace, and they are not only creating better workplaces but also more efficient workplaces in the same approach. And then, of course, contracts may depend on it, because your customers might require you to have workplace safety rules in place. So who has to comply? In Europe at least it's European law, which applies to everybody there, but there is similar legislation in Japan and the US. Every company that produces something and potentially has machinery running, from 10 people; service providers, who mainly have office workplaces, from 30 people. All the big companies do it, not even close to all medium companies do it, and nearly no SMEs do it. This is actually punishable: if you have repeatedly not adhered to the regulations and people get injured or even die, this is not just punishable by fines — you can actually go to jail for it.
Interesting here is that this is not just a regulation that Europe has imposed on the companies: they also found that they cannot really require small companies to do risk assessments all the time, because it can be really expensive, especially for the small ones. So they started supporting them. So how do you create safety at work? Basically by identifying whether you have a problem, evaluating it and handling it. It's a very simple approach. For example, here you could identify that it can happen that you have water on the floor, and if you have water on the floor, people can slip. And then you plan an action: for example, if somebody is cleaning the floor, they have to put up some protection fences, or these nice yellow stands that you often see. A more extreme example: here we have a secured robot production plant, and somebody is doing maintenance, and somebody else outside doesn't know that there is somebody doing maintenance, and he just turns it on. Why is it off? I turn it on — and boom, you have a fatality. If you're really into this, you can search YouTube for workplace accidents, and you'll see footage that keeps you from sleeping: machines that just suck in people, and they have filmed that. It's really not a nice thing to see. But you can quite easily do something: you can just put a lock on that switch. If you do maintenance — first thing, a lock on the toggle — then you can do your maintenance, and people are informed that something is going on. So doing a risk assessment means: you identify, you solve the problem, you train your people, you repeat and update. The problem here is that it creates a lot of paper. Massive amounts of paper. You record problems: you write them down. You document that you did something: you write it down. You plan measures, you plan remedies: you write those down. And of course, because it's a legal requirement, you have to document that you have done it.
So if the authorities come and want to see whether you did your risk assessment, they expect such a piece of paper. And of course nobody is reading that, but you file it because you have done it. So it's an archived file for the case that something happens. And if something happened, they will pull it out and see whether you have taken measures or not. The first argument that companies come with is: oh, that will cost me even more time, that will keep me from my actual work. It's not up to date anyway: I do it once and then I have to renew it in two years, so from next month it's not up to date anymore. And I can't read all of this anyway. And it's just more bureaucracy. But then it's so simple. You could do the identification on mobile, elaborate at your desktop, do the action planning online. Then you immediately have it digitally, so you can also make your training out of that. You can generate your reports if somebody still wants that paper, and export your training material if you have to do some training on site. Another good thing about keeping it online is transparency. If you have that stack of paper, it is placed in one place in the company, probably locked away in an archive. If you have it online, everybody can have access to it — even the works council, or the company physician: the doctor could check, "I'm seeing a lot of back injuries — where are these people working, and what is in the risk assessment? Is there something we can improve?" So you are even helping people to self-organize. Heads of divisions release assessments when they have found them to be okay, and you can do your online trainings. And even updating is much easier, because the system can send out simple notifications to the heads and say: your risk assessment needs updating, take a look. And of course you update based on the former version: you're not writing the whole thing again, but you are taking your old version and checking whether it's still valid.
Much less work, much less bureaucracy, much easier. And that's what it actually looks like. That's the welcome page, which does some explanation, and you can log in. In the EU version you can register with any account — there is no validation, it's not even checking whether the email is real, because nobody wants to audit you. It's a tool for the companies to help themselves: nobody is really checking whether a company is doing it properly. They can get some checks if they want to, but it's not done by the EU. So then you get a listing of the assessments that you have performed, and there are templates. The sectoral organizations of the different countries can provide checklists, and then the companies in those sectors can just take the checklists and fill them out. And of course the sectoral organizations know quite well what the risks in their sector are. In my development company I don't have machines, so I don't need checklists that ask about machines. But if I have a cleaning agency, I have water on the floor all the time. And then you have questions like this. And it's quite easy: you have a problem statement up here, and you can say yes, I have that problem, or I don't have it, or, in this case, I have it under control, yes I will — or it's not applicable. And I can read more information below if I don't understand the problem immediately. And there is even space for legal paragraphs, where you can explain why you need to check for that. If you have a problem — "no" is always the negative answer — then you can say what the priority of the risk is. Or there are different evaluation mechanisms, where you can say how probable it is that this problem occurs and how lethal it is. Like, if something is highly probable but the only thing you get is a blue mark, yeah, you can re-prioritize it down. But if it happens seldomly and you can die from it, then it's kind of automatically high priority.
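The evaluation just described — chance of occurrence and exposure combined with severity, with lethal risks forced to the top — can be sketched in a few lines. This is an illustrative sketch only, not the tool's actual implementation; the names, scales and thresholds here are invented for the example.

```python
# Illustrative risk-evaluation sketch: probability x exposure x severity
# determines a priority bucket. NOT the real implementation -- the scales
# and thresholds below are made up for illustration.

PROBABILITY = {"small": 1, "medium": 3, "large": 5}
EXPOSURE = {"almost never": 1, "regularly": 4, "constantly": 7}
SEVERITY = {"weak": 1, "significant": 5, "lethal": 10}


def calculate_priority(probability: str, exposure: str, severity: str) -> str:
    """Return 'low', 'medium' or 'high' for a risk."""
    # A potentially lethal risk is automatically high priority,
    # no matter how rarely it occurs.
    if SEVERITY[severity] == SEVERITY["lethal"]:
        return "high"
    score = PROBABILITY[probability] * EXPOSURE[exposure] * SEVERITY[severity]
    if score >= 120:
        return "high"
    if score >= 30:
        return "medium"
    return "low"


# A seldom-occurring but lethal risk still comes out on top:
print(calculate_priority("small", "almost never", "lethal"))  # high
print(calculate_priority("large", "constantly", "weak"))      # medium
```

The point of the special case at the top is exactly the one made in the talk: frequency alone must not be allowed to bury a risk you can die from.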
So there's a calculation mechanism in it that helps you to prioritize. Here's the example: what is the chance of the risk occurring — small, medium, large — and how often are people exposed to that risk. Here is an example of legal and policy references: why do you have to do this, why do you have to check for that. You can even write your own comments in. And that's the identification phase — identification and evaluation. So you go through your checklist, and after that you know all the problems that you have. Once you have identified all your problems, you can do your action plan. You write down what you're going to do about it, and that mainly means you describe the measures that you're going to take. If there's water on the floor: I'm going to buy these little yellow stands, and I'm going to train my staff to set them up when they start to clean and to remove them when the floor is dry. Simple things: a description, specific actions, the level of expertise needed, and who is responsible. And then you have actually fulfilled the legal requirements. It's about awareness raising: you have checked it, and you have decided what to do about it. And then you have an action plan, and of course you can print the action plan and export a report. And that is the paper you need — that is actually enough to be secure against prosecution. You have done your risk assessment. Of course, nobody can sue you for making a mistake in there; you cannot really prevent that. But at least you have taken the step to analyze your situation. And by now 15 countries in the EU are accepting such an online assessment as a valid replacement for the classical auditor who comes to your company and makes the assessment for you, which would cost you 20,000 euros at least. Here's an example of training material. When you have identified your risks, you actually already have all the information to make a training, because you have a description of your risk, of your problem.
You know you have it. You have some good-practice information added: what can I do to resolve the problem? You have put in your own measures. So you can also export slides, PowerPoint slides, from that, or make an online presentation. And there is a little space where you can put in a little bit of your own text if you want to personalize your slide. And the risks usually also have good imagery attached. And none of that you have to do yourself, because it comes from a sectoral organization. So here you can see the slides, and there is a button somewhere: you can press the button and then it will do an online presentation right away, or you export it, of course. The whole thing works mobile. It's responsive; it has been designed mobile first. And the idea is that you actually go to your plant, to your factory, and go through: do I have that problem, yes, no, do I see it somewhere? So you can make the assessment in situ, in place. And the whole thing is called OiRA — Online interactive Risk Assessment. And now — that was a long introduction — I come to the point: what does that have to do with Plone? What is my point with this talk? OiRA runs on Plone 5.1. It has not always done that; it started with Plone 2. It uses Dexterity — it was one of the first projects adopting Dexterity and tiles back then — and SQLAlchemy on Postgres. It has a Plone CMS to create these templates; they're questionnaires, basically. And perhaps somebody still remembers NuPlone — that was a Plone 2 skin back then, and that's what we still use for the editor interface. And it uses Plone and SQLAlchemy for the front end, to provide high performance for many users in parallel. So filling out the form is done in an optimized way: more or less we have views that write directly into the relational database. That's the NuPlone interface — that's how old it is, but it works fine. So that's where you basically create your tree of modules and questions.
It's a questionnaire designer, if you want. It supports versions, and you can work on your questionnaires and then publish them once done. And what's so special about this? I want to make a point against the people who say Plone is dead and doesn't evolve much, because I like it as a very stable platform. This project is actually having its 11-year anniversary now — I made this presentation last year, when we celebrated 10 years. It was started in the Netherlands as a tool for exactly that purpose, but was very quickly adopted by the European Union — by EU-OSHA, the European Agency for Safety and Health at Work — who said: this is something we have to provide to our SMEs, because we cannot just put legislation there and expect them to fulfill it; we also have to support them in doing this. It was commended in the Stoiber Report on reducing bureaucracy. And since then it got an upgrade to Plone 4, and finally the new UI in 2015. In 2018 we extended it, for Daimler, to also be used in the enterprise market, because they also noticed that it's not that different from the custom solution they were using internally. Big corporations have to do this all the time too; they have all the same problems, and it's insanely expensive for them. And if we can just attach it to their Active Directory, user management, single sign-on, whatever they have, and represent department hierarchies to assign tools, then they can use it right away. So it was really only about adding enterprise functionality, and now they are launching it in a few weeks. And they are now going to present it to the workers' compensation boards that I mentioned in the beginning, which were very opposed to the EU approach many years ago — "the EU is only giving us new bureaucracy." The setup survived in a highly political environment. Workplace safety is usually negotiated in a tripartite structure: there are governments making rules, there are trade unions protecting the workers, and there are employer organizations protecting companies.
And they are all fighting over this topic. The governments make rules that they think everybody just has to comply with. The trade unions want to protect workers' rights: they don't want workers to be surveyed, they don't want them to be controlled too much. And the employers always have the cost in mind: what will that cost me? More bureaucracy just means more expenses. Then, of course, there are conflicts between the national level and Europe: if Europe introduces rules, many countries say, oh, more rules from Europe, we don't want that. And different member states, of course, don't do it in the same way, so there are subtle differences in the needs: one wants a more complex evaluation, another a less complex evaluation; one wants to put the legal references up front. And that has been very, very easy with Plone, by having a core system and then just adding policy eggs on top. We have a pluggable integration of different evaluation algorithms, for example per country or even per sector. And it's very easy to report on the detailed usage, thanks to the relational database where we store what people are actually doing — because in the end, in that environment, it's always what you report that gets you the money to evolve. Then, of course, multilingual. Big, big issue. Yes, we can do all the languages, it's not a problem. Why is it not a problem? What will it cost you? You just have to translate a PO file, that's it. It is running in 15 languages at the moment, and there are rudimentary translations for the other nine EU languages, which are gradually being completed, so we will soon have all of them. And it has survived probably because it's not cutting-edge software: we have concentrated on a sensible set of functionality that does the job, and we have continually put the focus on keeping it very easy and simple to use. So the front end is very quick and easy; there's not a lot of stuff to configure.
It is very easy to reuse existing surveys that you have filled in. We could always upgrade — security was an important issue — and the established upgrade paths of Plone were fantastic, where other companies just have to redo their PHP systems again and again because the updates don't work. We did not have any problems. The robust hotfix and versioning policy makes it very plannable, because in an EU context I cannot just go and say: there's a hotfix, I need a new contract. We need to plan for the six months ahead. That was powerful, and it creates a lot of trust in the software. And we did not have any stability issues in 11 years: no memory leaks, no crashes, no unmotivated restarts. A few of the really big institutes are using it. TNO is the biggest research institute in Europe, actually. And I think there's even the Ministero del Lavoro — the Italians are making a big deal of it, and they are adding more features even this year; I think Italy is planning to roll it out for all sectors and all of Italy next year. Yes, there's a lot of interest at the moment. And of course it's free. The core system is called Euphorie. The original tool in the Netherlands was called RI&E, and when we did it for the EU, it became the EU RI&E system — hence "Euphorie". There are adaptation eggs: osha.oira, that's the EU edition if you want; tno.euphorie is the Dutch edition, with subtle changes. And there is a documentation package for it in English. And you can get it. And the interesting thing is that it is actually agnostic to the content. So we have another private company who is using it to audit kindergartens: they make their dedicated checklists — different kindergarten types have different requirements — and they go there with a mobile phone and check whether there are these protection thingies in the doors, so that you cannot put your fingers in, stuff like that. And then they come back and produce a report.
Yes, this is also a kind of workplace safety, but you could also use it to check whatever you want — safety is a very general thing. And that's OiRA, the Online interactive Risk Assessment. 11 years, and there is no end in sight. I guess we'll be operating it for another 10 years at least; we just won another contract for 5 years to support it, and I don't think the EU is stopping that project. Any questions? — Yes, on oiraproject.eu. That is the official site of the project, from the European Agency for Safety and Health at Work, where they give you all the surrounding information on the project. And that is the presentation of the EU edition. The EU edition is hosted by the EU, and any member country can use it if they want: they get an account for the country, the country can then create accounts for the sectors that are interested, which then go there and create their surveys. — Yes? Is it running on one server, or does every country have its own instance? — We are actually serving it from one instance for the whole of Europe, with a shitload of surveys — I think we have 67,000 surveys on there. And it's one virtual machine. — Good question. I don't have direct access to the statistics, but we have, I think, a few hundred active editors who are editing once in a while; as in a classic installation, that is not the main traffic. And we have at times thousands of users filling in these checklists. But the overhead is minimal, because more or less they click "next" and a few bytes are saved directly to the relational database, so we can handle a lot of concurrency. And it's all standard technology. And it is, by the way, now on Plone 5.1 — so upgrades were never a problem. Thank you. — So now we come to the end.
This is a case study of a large scale software that has been in operation for over 10 years and is still expanding. The latest addition is coming from a big automotive OEM and completes the chain from risk identification to staff training. This project demonstrates that Plone projects are very well maintainable over time and can adapt to new challenges. As a result there is now a framework to assess things - not necessarily risks - and act upon them. Maintainability, flexibility and modern technology don't necessarily exclude each other.
10.5446/54085 (DOI)
I'm very pleased to introduce Professor Samson Abramsky from the University of Oxford. Samson is a leading expert in several different areas of theoretical computer science, and today he will talk about fundamental aspects of quantum theory from a geometrical and logical perspective. Okay, thank you very much, Olivia, and thanks to all the organisers for inviting me to speak at this meeting. It's a fascinating meeting. One thing I've picked up is a kind of theme, that many of the talks take Topos theory and relate it to other areas. So I guess we heard on Wednesday about connections with wider areas of mathematics; yesterday, with the talks of Daniel and Jean-Claude, we heard about connections with neural networks; and today I want to talk about connections with some fundamental ideas in quantum mechanics. So we all know in some sense that quantum mechanics implies a fundamentally non-classical picture of the physical world, and one of the clearest, sharpest ways that this non-classicality is expressed is in the phenomena of non-locality and contextuality, with the famous results of John Bell and the Kochen-Specker theorem as the most famous articulations of this. Now the main message I want to convey in this talk, for the interest of this audience, is that the mathematical structure of contextuality, of which non-locality is a special case, is fundamentally sheaf-theoretic. It's fundamentally about the passage, or obstructions to the passage, from local to global. So the non-existence of classical explanations for quantum phenomena corresponds precisely to the non-existence of certain global sections. And this leads to both logical and topological descriptions of these phenomena, very much in the spirit of Topos theory. So we'll use, at times more or less explicitly, a sheaf-theoretic language, and clearly the whole thing is in the scope of Topos theory.
And a nice point about this is that it allows standard constructions which witness these results, things you find in the physics literature known as, for example, Kochen-Specker paradoxes, Hardy paradoxes and so on (paradox is a frequently used term in this area), to be visualized in a very direct way as discrete bundles. So there is a kind of topology here, and in fact the non-classicality appears exactly as a logical kind of twisting of these bundles, which is very directly related to classical logical paradoxes. But also, on the other hand, there is topology there, and it's witnessed by the non-vanishing of cohomological sheaf invariants. So this harmony between a logical point of view and a topological point of view is very much, I think, in the spirit of Topos theory, and at the same time it's also strongly connected with probabilistic ideas. Both quantitative and qualitative, and discrete and continuous, features arise naturally. And again, as I was saying, it is very much in the spirit of Topos theory to provide a home for all these different aspects. Excuse me, Samson, we just see a page with the title overview. Is that normal? Oh my goodness. No, I've been showing text. Can you see my text? Yeah, yeah, yeah. Everything is there, right? So this is the thing I just said. So you can now see the text as well. Okay, very good. All right. So, quantum contextuality. So yeah, let me know if you're not seeing, I mean, I've got a lot of these pauses, so let me know if you're not seeing text. There should be text all the way. All right. So, more specifically, let's focus on quantum contextuality. So I guess everyone's heard of Bell's theorem; I don't know how many people are familiar with it, but we'll see it very shortly. So we know that quantum mechanics is weird, as is often said; it certainly doesn't conform to our classical picture of reality.
And this has quite profound implications indeed for our conception of what reality is, and also, in terms of applications, for the possibilities for information processing. And we'll see something about that as well. So it has a very foundational aspect, but potentially also a very important technological one. So what is contextuality? I like to encapsulate it in the following slogan: in a nutshell, contextuality is where we have a family of data which is locally consistent, but globally inconsistent. And you see immediately from this phrase, something about the passage from local to global, that it should be sheaf-theoretic, and as we'll see, it is sheaf-theoretic in nature. So as a helpful initial analogy, suppose we were taking pictures of some building and we had a collection of pictures; this is like our family of data. Maybe our camera isn't able to take a picture of the whole structure, but we can take pictures of parts of it. And we see that each part looks okay, and even the places where they join up, where they abut, are consistent with each other. So this is a locally consistent family of images. But we would naturally think that these are all just parts of some coherent whole; there's an actual building out there. So we try to put them all together. And what we see here is the famous Escher Ascending and Descending, the visual paradox of a staircase that's either always going up or always going down, depending on how you look at it. So here is our global inconsistency. Incidentally, as a side remark, this very figure was suggested to Escher by Roger Penrose, who with his father studied visual paradoxes and even associated a cohomology to them, quite akin to what we'll be talking about later. Okay, and then a very brief recap on quantum theory.
I'm not going to delve into this, and I understand that people in this audience are mostly not coming from physics, but it is really just to reassure you that nothing more is needed than some linear algebra, because most quantum information and computation theory, and indeed also the foundational results relating to non-locality and contextuality, takes place in finite-dimensional Hilbert space. So finite numbers of qubits, or maybe qudits if we allow more alternatives. So the space of a qubit is just a two-dimensional complex vector space, and in general a finite-dimensional Hilbert space is just C^n, which we can regard as a complex inner product space. So operators are just complex matrices, and the adjoint of a matrix is its conjugate transpose. So we can understand it very simply. Now, what is a state of a quantum system? In general, it's a density matrix, which is a positive semi-definite, self-adjoint, trace-one matrix. In particular, pure states are just rank-one projectors. So we can think of a pure state as a unit vector, or just as the one-dimensional subspace that it generates. So when you prepare a quantum state, the representation is just this simple object of linear algebra. And when we measure, projective measurements are described by self-adjoint matrices, and the idea is that the eigenspaces of the matrix in its spectral decomposition correspond to the possible outcomes of the measurement. And the basic rule that lets us compute probabilities, which gives the predictive content of quantum mechanics, is the Born rule. The Born rule says that the system in state rho will take the i-th possible outcome of the measurement, represented by the eigenspace with projector P_i. Yes, so when I said eigenspaces, I was writing the projectors here that correspond to those eigenspaces.
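As a concrete illustration of the Born rule just stated, here is a minimal sketch in plain Python, with no libraries; the particular state and measurement are toy choices of mine, not anything from the talk's slides.

```python
import math

# Pure state |psi> = (|0> + |1>)/sqrt(2), a single qubit
amp = 1 / math.sqrt(2)
psi = [amp, amp]

# Density matrix rho = |psi><psi| as a 2x2 nested list
rho = [[psi[i] * psi[j] for j in range(2)] for i in range(2)]

# Projectors onto the computational-basis outcomes |0> and |1>
P0 = [[1, 0], [0, 0]]
P1 = [[0, 0], [0, 1]]

def born_probability(rho, P):
    """Born rule: Pr(outcome) = tr(rho P)."""
    n = len(rho)
    return sum(rho[i][k] * P[k][i] for i in range(n) for k in range(n))

p0 = born_probability(rho, P0)  # 0.5
p1 = born_probability(rho, P1)  # 0.5
```

As the talk says, for a pure state and a rank-one projector this is just an inner-product computation, here written out as a trace.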
So the Born rule simply says that the probability of getting outcome i is just the trace of rho P_i, where P_i is the projector onto the i-th eigenspace. And in the case of pure states and rank-one eigenspaces, this just reduces to computing inner products of complex vectors. So there's just some simple linear algebra behind it. When I talk about the various observable behaviours that we'll discuss being realized in quantum mechanics, we mean that we can find matrices of this kind which, under this Born rule, produce the observable probabilistic behaviour. Okay, well, I'm now going to come immediately to a concrete example; in fact, quite soon we'll be proving some version of Bell's theorem. So the usual way one talks about this in physics terms is that you prepare a state somewhere. Remember, a state is represented, as we just said, by one of these density matrices. We have measurement devices: Alice's measurement device, Bob's measurement device. Each chooses some measurement, performs the measurement on their part of the state that's been prepared, and observes some outcome. But rather than talking in these terms, there's another language which is very popular in quantum computation, where we can even strip away this kind of physics-speak and talk about a certain kind of game, a non-local game or an Alice-Bob game. This is mathematically isomorphic to the experimental setup that we were just looking at. So in this game, we have two players, Alice and Bob, and they play cooperatively against a verifier. But the constraint that's placed is that Alice and Bob can't communicate while the game is being played. That's the significance of this wall between them. So they're put in different rooms, we take away their mobile phones, we enclose the rooms in Faraday cages; there's no communication between them.
Of course, ultimately, they should be space-like separated, so there's no time for light to pass between them during a round of the game. The verifier supplies Alice and Bob with inputs, and they have to return outputs. If the outputs meet a certain winning condition, then they've succeeded in winning the game. So really, the idea is to see what kind of coordinated behaviour Alice and Bob can achieve, given that each doesn't know which input the other has received. The most famous example, which is very closely related to the usual Bell's theorem setting, is the XOR game. So this is what we just said: the verifier chooses an input for each of Alice and Bob, and we assume for convenience that the verifier chooses the inputs according to a uniform distribution. Alice and Bob each have to choose an output, and as we said, they're not allowed to communicate. And the winning condition in this XOR game (these are all Booleans, zero or one) is that the exclusive-or of the outputs should be the conjunction of the inputs. Exclusive-or is just addition modulo 2. So what does this mean? If you think about it, the conjunction is only true if both x is one and y is one, and in the three other cases it's zero. So in those three cases, the exclusive-or has to be zero as well, and that can only happen if a and b have the same value: if you think of addition modulo 2, zero plus zero is zero, one plus one is zero. So in those first three cases, the outputs have to be the same; they have to be correlated. And in the final case, where both x and y are one, the exclusive-or has to be one, and that can only mean that a and b have different values; they're anti-correlated.
So the probabilities of the winning outcomes given the various inputs are given by this expression here. A probabilistic strategy is just given by a table of conditional probabilities like this, and if we assume a uniform distribution on the choices of inputs, then the success probability for the strategy is just given by this expression. So that's the XOR game. How well can we play this game? Well, here is a strategy for the game. It's a table of 16 numbers; each of these rows is one of these conditional distributions, conditioned on a choice of inputs to Alice and Bob, and here we see the probabilities with which the various outcomes are obtained. We allow Alice and Bob to use randomization techniques as they wish, subject only to the constraint that they can't communicate during the playing of the game. So what this entry highlighted here means, for example, is: if the verifier sends Alice a zero and Bob a one, then with probability one eighth, Alice outputs a zero and Bob outputs a one. So I hope that is clear. Okay, so we see a table of 16 numbers; what can we learn from it? Firstly, we'll learn a bit later that there's a presheaf here, something of that kind of structure. But what we see immediately is that if we take the expression we looked at for the winning conditions (so we're looking at the correlated outcomes in the first three rows, that's the two outside columns, and the anti-correlated outcomes on the final row, that's the two inner columns), and we sum all those probabilities and divide by four to normalize, we get a winning probability of 0.81. Well, is this big news? Yes, it is big news, or at least, if we believe this thing can exist in the world, it is big news, because the optimal classical probability is three quarters.
If the world behaves as it is supposed to behave according to classical physics and classical probability, then this strategy couldn't exist. Where do I get this three quarters from? Well, in one direction it's very easy to see that we can at least get a probability of three quarters. For example, suppose Alice and Bob decide beforehand that they will both always output a zero, or both always output a one; either choice is good. Then you see that in three of the cases they win, because if the verifier sends them any of the first three input pairs, i.e. not both one, then the correlated outcomes are winning outcomes. So if they put all their weight on those outcomes, just a column of ones here and everything else zero, then three quarters of the time they would win. So that's a very dumb strategy which achieves a winning probability of three quarters. That this is not only what you can do with a dumb strategy, but all you can do with classical means, is less obvious. This uses, and is essentially expressing, the idea of Bell inequalities, which is the fundamental idea behind Bell's theorem, a fundamental method in the whole of quantum foundations and quantum information, and the subject of many experiments and of the emerging quantum technologies. So this is a very fundamental idea: it gives an actual limit to what can be achieved by classical means, a limit which this strategy exceeds. The table I just showed you exceeds this bound, and moreover, it's quantum realizable, meaning that we can find a quantum state, described by an entangled pair of qubits, and appropriate quantum measurements; we just find these matrices and use the Born rule to compute what they do, and this would give you exactly this table of numbers.
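The classical bound of three quarters can be verified by brute force: there are only 16 deterministic strategies, and shared randomness cannot help, since a mixture of strategies wins with the average of their winning probabilities. A small sketch (the encoding of strategies is mine):

```python
from itertools import product

def wins(x, y, a, b):
    # XOR game winning condition: a XOR b must equal x AND y
    return (a ^ b) == (x & y)

best = 0.0
for a0, a1, b0, b1 in product([0, 1], repeat=4):
    # A deterministic strategy: Alice answers a_x on input x, Bob b_y on input y
    p = sum(wins(x, y, (a0, a1)[x], (b0, b1)[y])
            for x in (0, 1) for y in (0, 1)) / 4
    best = max(best, p)

# best == 0.75: no classical strategy exceeds three quarters
```

Any strategy satisfying all four winning conditions simultaneously would have to solve a contradictory system of equations, which is exactly the logical inconsistency exploited in the Bell inequality derivation below.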
So quantum mechanics predicts that we can construct such a strategy, and as we'll shortly see, this is confirmed by experiment and is now already being woven into quantum cryptography and various kinds of quantum information processing scenarios. So this is a clear case of using quantum resources to yield a quantum advantage in an information processing task. Okay, so let's discuss Bell inequalities. And this takes us to a pioneer of both probability and logic, namely George Boole. Boole had a beautiful paper in the 1850s, still very much worth reading, and as was observed particularly by Itamar Pitowsky, what Boole was discussing there is essentially the subject of Bell inequalities, although of course he had a different motivation, just looking at a fundamental question in probability theory. So the problem is: we're given rational numbers which indicate relative frequencies of events, and if no logical relations obtain among the events, then the only constraints imposed are that each number be non-negative and at most one. But if the events are logically interconnected, there are further equalities or inequalities that obtain among the numbers. The problem is thus to determine the numerical relations among frequencies, in terms of equalities and inequalities, which are induced by a set of logical relations among the events. And these are called conditions of possible experience. So more formally, we're given some basic events, which we can think of as variables, and some Boolean functions of them, which can be described by propositional formulae, and we're given probabilities of these events. And we're asking: what numerical relationships between the probabilities can we infer from the logical relationships between the events? So this is already a beautiful interplay between logical notions and probabilistic notions. And let's make a very simple observation which ends up giving a rather complete answer to this question.
So we're given some propositional formulas, and we're given probabilities for them; this fits exactly into the picture we were just discussing. And we make a very basic assumption: to have some logical connection, we suppose that the formulas are not simultaneously satisfiable, that they can't all be made true. These are just propositional formulas. In other words, any n minus one of them must imply the negation of the n-th. So now, using elementary probability theory, we can extract something from this. We see that the probability of the n-th event, described by this formula, must, because of this implication and the monotonicity of probability, be less than or equal to the probability of this disjunction. And then by what is fittingly called Boole's inequality, a very simple inequality, the first of the Bonferroni inequalities (no disjointness needed, just the fact that the probability of a union is less than or equal to the sum of the probabilities), we get this inequality. The probability of a complement is just one minus the probability, we collect terms, and we end up with this: just under the assumption that the formulas are not simultaneously satisfiable, the sum of the probabilities must be bounded by n minus one. So this is a very simple derivation, and we can immediately apply it to the Bell table. So here are the winning responses of our XOR game highlighted, and we have the probabilities weighting those outcomes in our table. And so we form events that correspond to the winning conditions. Remember, this was that on the first three rows the outcomes are correlated, and on the final row they're anti-correlated; we can describe this in logical terms by these formulae. And it's very easy to see that these are not simultaneously satisfiable; if you take them all together, they're contradictory.
So we start from a2: we can replace it by b1, we can replace b1 by a1 and a1 by b2, and then b2 exclusive-or b2 can never be true. So they're not simultaneously satisfiable. So our little proposition from the previous slide must apply, and therefore it should be the case that the sum of these highlighted numbers is less than or equal to three; we have four events, so n minus one is three. But we see in fact that we get a violation of this Bell inequality, by a quarter. Now, this isn't a contradiction in mathematics, so there must be an assumption that I've made. It's nice to think about this, but if you look carefully, you see that I've tacitly supposed that all these individual probabilities come from a single joint distribution. And that's really the classical assumption: that there's a global event space where everything lives, and everything objectively happens or doesn't happen, regardless of which variables we choose to look at. Under that assumption, this reasoning goes through perfectly and would indeed give us the bound, and then we'd see that this table is not consistent with those assumptions. So we get a violation of this logical Bell inequality. And in fact, this simple derivation has a natural generalization. Given a family of propositions, we say it's k-consistent if the size of the largest consistent subfamily is k; so if there are n propositions, k out of n. And if you have a k-consistent family, you can easily show, in the same kind of way, that whatever the probability assignments, whatever joint distribution we have, the sum of the probabilities of these events is bounded by k. But the interesting point is that all Bell inequalities arise in this way.
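To double-check the unsatisfiability argument, one can brute-force the four propositions over all sixteen Boolean assignments; this also confirms that the family is 3-consistent, so the logical Bell inequality bounds the sum of their probabilities by three. A sketch (the encoding of the formulas is mine):

```python
from itertools import combinations, product

# The four propositions from the XOR game, over Alice's outputs a1, a2
# and Bob's outputs b1, b2: three biconditionals and one exclusive-or.
props = [
    lambda a1, a2, b1, b2: a1 == b1,
    lambda a1, a2, b1, b2: a1 == b2,
    lambda a1, a2, b1, b2: a2 == b1,
    lambda a1, a2, b1, b2: a2 != b2,   # the anti-correlated row
]

def satisfiable(formulas):
    """True if some assignment of 0/1 to (a1, a2, b1, b2) makes all formulas true."""
    return any(all(f(*v) for f in formulas)
               for v in product([0, 1], repeat=4))

all_four = satisfiable(props)                                           # False
any_three = all(satisfiable(list(s)) for s in combinations(props, 3))   # True
```

So the family is contradictory as a whole but any three of the four are jointly satisfiable, i.e. it is 3-consistent, which is exactly what the bound of n minus one equals three uses.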
So the result is that a rational inequality is satisfied by all non-contextual models (I'm not explaining all the terms, but by all setups of this kind, in very considerable generality) if and only if it's equivalent to a logical Bell inequality of the above form. So in other words, logic is controlling what can happen in terms of probabilistic behaviour in this rather nice way. So I think this does give a nice logical answer to Boole's problem. In fact, Pitowsky suggested that one could envisage such a result, and I think this does give a nice answer. And it also tells us that if Boole was delimiting the conditions of possible experience, then in quantum mechanics we have quantum conditions of impossible experience. And there's a lot one can say about that. Of course, the experience is impossible only in a certain sense, because what's important in understanding what's going on here is that all we can observe on single plays of the game are these subsets of variables. We cannot observe all the variables together. So the assumption that there must be something true of all the variables at any point is the classical assumption. And you can see that this is going to lead us inevitably to something about non-existence of global sections. Okay, so is this all science fiction? Well, certainly quantum theory is verified and confirmed in countless experiments and in manifestations of our science and technology every day. But very specifically, these predictions were tested; there was a history of experimental tests, turning back to the picture of preparing states and measuring them and so on.
And this culminated just a few years ago in the first widely agreed loophole-free Bell tests, where sufficient space-like separation was achieved between the Alice and Bob measuring stations, and sufficient detector efficiencies were achieved, so that there was a sound basis for these results. And this is a picture, if you don't recognize him, of John Bell, who did this pioneering work in 1964; and a review article by Alain Aspect, who was really the pioneer in conducting Bell experiments going back to the 1980s, and who looked at these more recent experiments, actually by three teams, which, as he says, closed the door on this quantum debate. So here you see a schematic where you have a source emitting these entangled pairs of photons; Alice and Bob now become detectors performing measurements, and we're looking at coincidences, correlated or anti-correlated, as we were saying. This was actually done, in various experimental setups, by several teams. So here's a sort of timeline. The formalization (in the finite-dimensional case just linear algebra, and more generally with Hilbert spaces and operators and so forth) goes back to von Neumann, even to the late 1920s, and we've been using it since then. The EPR paradox already in the 1930s; Bell's theorem in the 1960s; the first experimental tests, still subject to experimental imperfections, in the 1980s. We already see the beginnings of quantum cryptography and of quantum computing in the 1980s; Shor's algorithm for factoring in the 90s; and these first loophole-free Bell tests by the three teams at Delft, NIST in the United States, and Vienna in 2015. In 2019, quantum supremacy was claimed by Google, among various others. So this is to do with quantum advantage and an emerging quantum computing and technology industry right now. Okay, so let's be a bit more formal and a bit more general, just to give an idea of how we can connect to the sheaf-theoretic language.
So we can have a kind of level of types which sets up the structure of a measurement scenario. I'm just going to look at finite things here. So we have a finite set of measurements, and then we have a simplicial complex whose faces are called the measurement contexts. This is telling us, as we already saw implicitly, that not all variables can be measured together; only certain variables are measured together. In quantum mechanics, this is because you have, in general, incompatible measurements, non-commuting observables. So those that can be measured together are collected in the faces of a simplicial complex, and then we have a finite set of possible outcomes for each measurement. So this is just the structure of a measurement scenario: it's saying what kinds of variables you have, which of them can be measured together, and that's all. And then an actual behaviour in such a setting, which we can think of in terms of preparing some state and performing some measurements, is given by what we call an empirical model, which is a family, indexed by the faces of the simplicial complex, of probability distributions over the joint outcomes. So these are the rows of our table, as we saw earlier. The simplicial complex here is just a graph, where we have these binary contexts. We'll see more of this later, but we can visualize this in a very nice way as a kind of discrete bundle. Let me just say that a key point here, in the underlying assumptions, is a condition which is really enforcing the idea that our choice of context is not itself information that influences the outcome; in other words, that marginals are well defined, essentially.
If you have a smaller set of variables, then whether you marginalize from one larger face or from another, you get the same result. This is a physical principle: generalized no-signaling, which in the context of Bell scenarios relates to relativistic constraints, and which in contextuality is called the no-disturbance condition. So then we say that an empirical model, as we just defined it, is non-contextual if there is a joint distribution over the whole set of variables which marginalizes to give us all these observable behaviours; that is, we can glue all the local information together, the things we can directly observe by performing measurements, into a joint distribution from which all the local information can be recovered. We call such a thing a global section; once we put it in categorical, sheaf language, it is a global section, as we'll see in a moment. And if no such global section exists, the empirical model is contextual. Thus this makes more precise our previous slogan that contextuality arises where we have a family of data which is locally consistent, but globally inconsistent. And the import of Bell's theorem and similar results is that there are empirical models, testable behaviours, and indeed experimentally tested behaviours arising from quantum mechanics, which are contextual. So just to give a more categorical formulation: given one of these scenarios, with its simplicial complex, we can define a presheaf. Firstly we have the event sheaf, which simply collects, for each context, each face of the simplicial complex, the local sections giving an outcome for each of the variables in that context. And then we compose that with the discrete distributions monad on the category of sets; this is just to keep things simple.
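An empirical model and its no-disturbance condition can be made concrete in a few lines: a family of distributions indexed by contexts, whose marginals are well defined. Here is a sketch for the Popescu-Rohrlich box, which appears later in the talk; the data layout is my own.

```python
from fractions import Fraction

half = Fraction(1, 2)

# Empirical model for the PR box: a distribution on joint outcomes (a, b)
# for each measurement context (x, y).  Outcomes with a XOR b = x AND y
# get probability 1/2; all others get probability 0.
model = {
    (x, y): {(a, b): (half if (a ^ b) == (x & y) else Fraction(0))
             for a in (0, 1) for b in (0, 1)}
    for x in (0, 1) for y in (0, 1)
}

def marginal_alice(context, a):
    """Marginalize the context's distribution onto Alice's outcome a."""
    return sum(p for (oa, ob), p in model[context].items() if oa == a)

# No-disturbance / no-signaling: Alice's marginal for her measurement x
# is the same whether computed from context (x, 0) or context (x, 1).
no_disturbance = all(
    marginal_alice((x, 0), a) == marginal_alice((x, 1), a)
    for x in (0, 1) for a in (0, 1)
)
```

Every marginal here comes out to one half, so the model is no-signaling even though, as we'll see, it has no global section at all.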
The discrete distributions monad is just the reduction to the discrete case of the Giry monad, if you happen to have encountered this, which, going back actually to ideas of Lawvere, is a basic way of categorifying probability theory. So it's a very useful monad that comes up a lot in many applications. So this gives a presheaf, and the restriction for this presheaf is exactly marginalization. And then we see that an empirical model, which builds in this no-signaling condition, this physically significant condition, is just a natural family, given that the restriction is marginalization. That the restriction is marginalization is obvious here: we trivially just have projections in the event sheaf, and then from the functorial action of this monad we recover marginalization as the restriction operation. So everything fits very neatly into a categorical formulation. And there's also a lot of topology at work here, even in this finite setting. I'd like to firstly show you this in a visual form and then say a little bit about cohomology invariants. So here is a way we portray these simple scenarios, like the Alice-Bob game or Bell scenario that we were looking at earlier. We have a kind of bundle structure, where we have a base with the variables, and we put an edge between two variables if they can occur in the same context, that is, if they can be measured together. So Alice can measure any of her variables together with any of Bob's variables and vice versa, but Alice can't measure both her variables at the same time, and Bob can't measure both of his. So that's why we have the edges that we do. Above each variable in the base, we have a fibre of the possible outcomes; in our case here, these were just binary outcomes in each case.
So we have these little fibres sitting above the variables in the base, and then the possible local sections. Here I'm abstracting from the probabilities, just taking those things that are in the support of the distribution, that have a positive probability, that could happen. And so we have edges that indicate those observable events that could actually happen. And a global section here would be just a path that goes around all the fibres in a consistent way, a closed curve that comes back and assigns a unique value to every variable globally. So that's what a global section is. Okay. So I'm going to look at another famous example from the quantum literature, the Hardy paradox, as it's often called. And the important point is that the contextuality here can be seen at the logical level, without even invoking probabilities. We only need to distinguish between what gets probability zero and what gets positive probability. So I put ticks where there's a positive probability (if you like, it's possible) and crosses for those with probability zero, namely impossible. So we have the same scenario as before, the same shape as the Bell test, but it's a different kind of model. As we said, these are the compatible observables, and those are the fibres. And then you see that on the first row everything is possible, so we put in all these edges; on the second row only three things are possible, so we put those three edges in; and similarly for the other rows. And now the point is that there are some global sections in this table. Here's one: a consistent assignment of values, where the edges have to be things that we allow as possible events. So there are some global sections. But suppose we took a different choice of a local section here. The question is: can we extend it to get a global section?
And in this case, you see, we have a problem. We can go forward, and forward, and forward again, but the only choice we have doesn't take us back to where we started. There is no closed curve of this kind. And that corresponds exactly to the fact that there's no way of extending this: there's no global possible event which could account for this observable local event. And this is already the signature of contextuality. And here, in fact, we can distinguish in this way different strengths of contextuality. There's the basic probabilistic version that we saw earlier. Hardy illustrates what we call logical contextuality, where there are some global sections but some local sections that can't be extended. And the extreme case, which we call strong contextuality, is illustrated by this, as it were, discrete Möbius strip. Now this discrete Möbius strip is exactly what corresponds to a famous construction in quantum foundations and quantum information, namely the Popescu-Rohrlich box. Interestingly, this is something that can't be realized in quantum mechanics; it's even more weird than quantum mechanics allows for. But on the other hand, there are quantum-realizable phenomena, a little bit harder to draw, which exhibit the same strength of contextuality. The point here is that wherever you start, you never go around and get back to where you came from exactly; you have to go around twice. So there's no univocal, unique assignment of values that's consistent and extends any local section whatsoever. So this is a Möbius strip. It's also related to a lot of things. Let's see how it's actually related to the Bell table, the XOR game, essentially the subject of the Bell tests, because those experimental Bell tests were exactly confirming this kind of setup.
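The strong contextuality of the PR box is easy to verify exhaustively; a minimal sketch, assuming the standard XOR presentation of the box:

```python
from itertools import product

# PR-box support: in context (x, y) the joint outcomes (a, b) satisfy
# a XOR b = x AND y -- the discrete Moebius strip
support = {
    (x, y): {(a, b) for a, b in product([0, 1], repeat=2) if a ^ b == (x & y)}
    for x, y in product([0, 1], repeat=2)
}

def consistent_globals():
    # Global value assignments (a0, a1, b0, b1) that restrict into the
    # support of every context
    out = []
    for a0, a1, b0, b1 in product([0, 1], repeat=4):
        a, b = (a0, a1), (b0, b1)
        if all((a[x], b[y]) in support[(x, y)] for x, y in support):
            out.append((a0, a1, b0, b1))
    return out
```

Here `consistent_globals()` is empty: no local section whatsoever can be extended, since summing the four XOR constraints modulo 2 gives 0 = 1.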
So really the winning positions in this game, if we just focused on those, would give us exactly the nodes we see in this Möbius strip. So that was what we were doing all the time. And remember that the basis of our derivation of the Bell inequality was that the corresponding propositions were logically inconsistent. So we already see the elements of a beautiful connection between physics, probability, logic, and topology. So let's say a little bit more about logic. There's the famous Liar paradox, the sentence that says of itself 'I am false', or 'this sentence is false'. By extension, we can take Liar cycles, the sort of thing that logicians study, where we have a sequence of sentences, each of which says that the next one is true, but the last one says that the first one is false. And these Liar cycles can be modeled by systems of equations. The point about these cycles is that each of the statements involves a subset of the variables, and each individually, or in fact any n minus one of them, are consistent, but if you take all of them, then you get an inconsistency. And in fact, up to rearrangement, the Liar cycle of length four corresponds exactly to the PR box, and the usual reasoning to derive a contradiction from the Liar cycle corresponds precisely to the attempt to find a univocal path in the bundle diagram. Of course, this can all be discussed much more generally, but I think this already shows the connection vividly. And for those of you familiar with logic, one of the famous results of first-order logic is Robinson joint consistency, which is an equivalent of the Craig interpolation lemma: it says that if you have theories over different languages which are consistent modulo their common sublanguage, then the union of the theories is consistent. So you can turn this around into Craig interpolation.
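The cycle-of-equations reading can be checked mechanically; this toy sketch (my encoding, not the talk's) confirms that every proper subsystem of the length-4 Liar cycle is consistent while the whole is not:

```python
from itertools import product

def liar_cycle(n):
    # Liar cycle of length n as a system of equations: sentence i asserts
    # sentence i+1 is true, and the last asserts the first is false.
    # An equation (i, j, neg) reads x_i = x_j (or x_i = not x_j if neg).
    eqs = [(i, i + 1, False) for i in range(n - 1)]
    eqs.append((n - 1, 0, True))
    return eqs

def satisfiable(eqs, n):
    return any(
        all(v[i] == (1 - v[j] if neg else v[j]) for i, j, neg in eqs)
        for v in product([0, 1], repeat=n)
    )

eqs = liar_cycle(4)
# Each proper subsystem (drop any one equation) is consistent...
subsets_ok = all(satisfiable(eqs[:k] + eqs[k + 1:], 4) for k in range(4))
# ...but the whole cycle is not: the same obstruction as the PR box
whole_ok = satisfiable(eqs, 4)
```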
Now this says that two compatible theories can be glued together; they're compatible in the sense that they agree on their overlap, or at least don't disagree on their overlap. In this binary case, local consistency implies global consistency. What you will never see in any logic book is something that goes beyond the binary case, and that's for the very good reason that if you go beyond the binary case, it fails. And actually you get a minimal counterexample, even propositionally, by taking these three theories: you see that they're locally consistent, pairwise consistent, but jointly inconsistent. This is again something that occurs famously in the quantum foundations literature: this is the Specker triangle from his 1960 paper, preceding the famous paper with Simon Kochen. Okay, and then a little bit about the cohomological characterization. There is topology here, and we can witness contextuality by cohomology. I'm not going to go into details on this, but it's really fairly simple: we're just witnessing the non-existence of global sections. So this is meant to be a support presheaf. And we take a relative cohomology, so we focus attention on one context; this is like starting with one of the local sections and seeing what happens. And we can assign to each local section here an element of the first group in the Čech cohomology, which we work with because it's easy to compute with. And in fact, gamma is really just the connecting homomorphism of the long exact sequence. So there are some choices here, and these are exactly the inessential choices you make in proving the snake lemma. Okay, so the basic results are indeed, as we would expect, that the vanishing of the cohomology obstruction is equivalent to there being a family which extends the given local section.
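In one standard presentation (an assumed rendering, à la Specker's parable of three boxes of which any two may be opened), the triangle is the parity system:

```latex
\begin{align*}
  a \oplus b &= 1, &
  b \oplus c &= 1, &
  a \oplus c &= 1.
\end{align*}
% Any two of the equations are satisfiable (e.g. a = 0, b = 1, c = 1
% satisfies the first and third), but adding all three modulo 2 makes
% the left-hand sides cancel while the right-hand sides sum to 1,
% giving 0 = 1: pairwise consistent, jointly inconsistent.
```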
So if the model is extendable, in the sense we were just saying, that starting from a local section you can always get a global one that extends it, then the obstruction vanishes. So what this is saying is that non-vanishing of the obstruction, the quantity we just defined in the first cohomology group, provides a cohomological witness for contextuality. Now this sounds perfect, it's great, but of course the drawback is that to get going we needed to abelianize, taking not the supports themselves but the free modules they generate over the integers. So we're abelianizing, and we have these integer coefficients, and because of negative coefficients in cochains, things that we don't really care about, there are false positives. Nevertheless, we can effectively compute witnesses for contextuality in many of the cases in the literature. And in cases where the outcomes themselves have a module structure, we obtain very general results, so-called All-versus-Nothing proofs, which actually account for most of the contextuality arguments in the quantum literature. In particular, we can find large classes of concrete examples in stabilizer quantum mechanics, and we have a complete characterization of contextuality there. Now there were counterexamples, found by my student Giovanni Carù; that's beautiful work in his thesis, and he has a paper I'll mention at the end which gives a refined cohomological criterion which covers the vast majority of cases, kills all known counterexamples, and is conjectured to be complete, in fact. And following our work, Robert Raussendorf, a leading figure in quantum information, and Cihan Okay have developed a related cohomological treatment of contextuality. There's some of what they've done.
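As a rough schematic of the construction (the notation here is assumed, not fixed by the transcript):

```latex
% F is the Z-linearized support presheaf, C the fixed context, and the
% relative short exact sequence of presheaves is
%   0 --> F_{\tilde{C}} --> F --> F|_C --> 0.
% Its long exact sequence has a connecting homomorphism
\[
  \gamma \colon F(C) \cong H^0(C, F|_C) \longrightarrow
     H^1(\mathcal{M}, F_{\tilde{C}}),
\]
% and if a local section s in F(C) extends to a compatible family, then
% \gamma(s) = 0; hence \gamma(s) \neq 0 is a cohomological witness for
% contextuality (the converse can fail: the false positives just mentioned).
```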
My student Sivert Aasnæss has shown that their work also falls under the scope of these sheaf cohomology invariants, and Cihan Okay in particular has done a lot of work applying topological ideas in contextuality and quantum computation. Okay, so I hope I still have 10 minutes left, is that right? Yes, yes. Okay, thank you very much. So that's something about cohomology; I'll give some references at the end. I want to turn to a different aspect, some geometry; this is a multifaceted subject, and another facet is convex geometry, which is also very rich here. So think of these probability tables as vectors, just vectors of real numbers of probabilities: lay the table out as a vector in a high-dimensional Euclidean space. And then what we find is that the spaces of these probability models, under various assumptions, form natural convex bodies in this Euclidean space. In particular, because we have Bell inequalities, linear inequalities which bound all the possibilities for the non-contextual models, those models form a polytope. On the other hand, if we only impose the no-signalling conditions, the marginalization conditions, those are also linear equations and linear inequalities, because you also have non-negativity constraints, so they also form a polytope. And these things can in principle all be found by linear programming, although the linear programs get big very rapidly, of course. And then sitting in between these two, the non-contextual case and the no-signalling case, we have the quantum set: those things that are quantum-realizable using quantum states and quantum measurements, computing probabilities via the Born rule, as we mentioned. It's a convex set, but it's very definitely not a polytope.
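The Bell-inequality facet of the noncontextual polytope can be exhibited by brute force over its vertices, the deterministic strategies (a small illustrative sketch, not code from the talk):

```python
from itertools import product

def chsh(E):
    # The CHSH Bell expression over the four correlators E[(x, y)]
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

# Vertices of the noncontextual polytope: deterministic +/-1 strategies,
# where party A answers a[x] on input x and party B answers b[y] on y
best_classical = max(
    chsh({(x, y): a[x] * b[y] for x in (0, 1) for y in (0, 1)})
    for a in product([-1, 1], repeat=2)
    for b in product([-1, 1], repeat=2)
)

# The no-signalling polytope reaches 4 at the PR box, while the quantum
# body peaks at 2*sqrt(2) (Tsirelson's bound), strictly between the two
pr_correlators = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): -1}
```

Maximizing over the finitely many vertices recovers the classical bound CHSH ≤ 2, the Bell inequality in question.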
And we can relate the hierarchy of contextuality that we've mentioned to the geometry of these polytopes. Obviously the interesting things from our point of view are those that lie outside the non-contextual polytope, that violate some Bell inequalities. The whole space of contextual models lies in this area here, but the most interesting for us are the quantum-realizable ones, which lie in this quantum body. Strong contextuality in general lies at the vertices, or at least on the faces of the polytope containing only contextual models, in lower-dimensional subspaces; and logical contextuality, things like the Hardy paradox, lies on faces of this polytope. So we get a strict hierarchy, in fact, of these different kinds of contextuality, and this classification in turn feeds into things we would like to do in relation to quantum advantage. As an interlude, I can't resist mentioning that although you may think everything is finite and it can't be that hard, there's a huge amount of complexity here. Just consider the question: given a finite probability table, as we've been showing, is there a quantum realization? Is there a quantum state and measurements which give rise to it via the Born rule? If we fix the dimension of the Hilbert space, this reduces to the existential theory of real closed fields. It's a finite search problem: we're searching for a fixed number of complex numbers, which really means a fixed number of real numbers, and that isn't so bad, because we know from Tarski, and, since we only need existential queries, from later results, that this is a decidable theory, in fact decidable in PSPACE. So at least it's decidable, albeit of high complexity.
If we ask for realization in any finite-dimensional Hilbert space, so we're no longer bounding the size of the matrices we need to find, then this is undecidable. And moreover, rather fascinatingly, there are finite tables which are realizable in infinite-dimensional Hilbert space but not in any finite-dimensional Hilbert space. This comes from beautiful results by William Slofstra, published recently but going back a few years earlier, where he has a beautiful reduction to computational problems in group theory; one can use the Higman group to show this latter fact, for example. And even more spectacularly, we have the recent result with a great title, MIP* = RE, the result of Ji, Natarajan, Vidick, Wright and Yuen, which is simultaneously a major result in complexity theory, quantum foundations, and mathematics. So we have the interactive proof paradigm in complexity, which is famously equivalent to PSPACE; even if you give the prover quantum resources, but you only have one prover, it's still PSPACE by a famous previous result. But if you allow multiple quantum provers sharing an entangled state, then it allows all semi-decidable problems to be represented: for example, the halting problem, provability of statements in Peano arithmetic, and so on. So it already becomes really strong.
As a consequence, it refutes the Tsirelson conjecture, which says, in a way, that tensor products are no longer fully general in infinite dimensions: if you have two commuting subalgebras of operators, where everything in one subalgebra commutes with everything in the other, then in finite dimensions you can always represent this on a tensor product, and this is no longer true in infinite dimensions; that is essentially the Tsirelson conjecture. And the very famous conjecture of Alain Connes from the 1970s, the Connes embedding conjecture, is also refuted by this result; the connection between that and the Tsirelson conjecture was already known. So there are some very deep and remarkable phenomena lurking in these settings already. Okay, I'm running out of time. So I'll just say that this does reach into issues of quantum advantage: using these tools we've been mentioning to derive general results about where you can do something using quantum resources that you provably can't do using classical resources. This is an emerging technology which, it's pretty clear, is going to have a major impact on our scientific and technological world and beyond, and we still have remarkable examples but no general theory, so we're beginning to develop some elements of this, or that's the aim. Here we have a way of measuring contextuality, and for a large class of problems we have an inequality that shows that to get a better-performing algorithm you need more contextuality, and in various cases you need some of these stronger forms of contextuality that we were mentioning.
So really the major current topic of interest is where the line in the sand can be drawn separating quantum advantage, using these non-classical phenomena, from what is efficiently classically simulable, and there have been many surprises in both directions. A very promising recent line of work is on shallow circuits: breakthrough work by Bravyi, Gosset and König which gives an unconditional separation. Most separations you'll see are just based on the best you can apparently do, or, in theoretical work, rely on conjectures about separations of complexity classes, but they give an unconditional separation for this shallow circuit class. The idea of the shallow circuit class is that the non-locality, as we saw earlier, is weakened to a bounded locality: shallow circuits can get arbitrarily big, they can get wide but not deep, and each gate can only have a bounded number of previously computed gates feeding into it. This means that in a classical circuit, distant parts can only communicate through the structure of the circuit; but in the quantum case, because you have these correlation behaviours, as we saw earlier, you can achieve a provable quantum advantage. And although the absolute inequality is somewhat attenuated by the bounded communication you have in the circuit, asymptotically the advantage witnessed by the Bell inequality violation is recovered. So this really leverages all the tools we've been discussing to prove a striking result about what can be achieved with quantum advantage.
Of course, the point is that rather than a single finite case, we now have a whole family of instances, a circuit family, and so we get an asymptotic calculation there, and the same ideas can hopefully be transported to other computational settings. There are a lot of further developments: there are remarkably good connections with things that have nothing to do with quantum mechanics, for example in relational database theory and even in linguistics, and something it's been a pleasure for me to find out recently is that there are strikingly close connections to work done by Daniel and his students and collaborators, from, I think, quite different motivations essentially, but the mathematics is strikingly reminiscent, so that's something we hope to understand better. Let me just give some references to papers that will be published in various places, but you can conveniently access them on the arXiv; if you want more details of anything I've been talking about, you can find them there. And here are some of the people I've had the pleasure of working with on these things. So thank you very much. Thanks so much for this marvelous talk.
Quantum mechanics implies a fundamentally non-classical picture of the physical world. This non-classicality is expressed in its sharpest form in the phenomena of non-locality and contextuality, articulated in the Bell and Kochen-Specker theorems. Apart from the foundational significance of these ideas, they play a central role in the emerging field of quantum computing and information processing, where these non-classical features of quantum mechanics are used to obtain quantum advantage over classical computational models. The mathematical structure of contextuality, with non-locality as a special case, is fundamentally sheaf-theoretic. The non-existence of classical explanations for quantum phenomena corresponds precisely to the non-existence of certain global sections. This leads to both logical and topological descriptions of these phenomena, very much in the spirit of topos theory. This allows the standard constructions which witness these results, such as Kochen-Specker paradoxes, the GHZ construction, Hardy paradoxes, etc., to be visualised as discrete bundles. The non-classicality appears as a logical twisting of these bundles, related to classical logical paradoxes, and witnessed by the non-vanishing of cohomological sheaf invariants. In this setting, a general account can be given of Bell inequalities in terms of logical consistency conditions. A notion of simulation between different experimental situations yields a category of empirical models, which can be used to classify the expressive power of contextuality as a resource. Both quantitative and qualitative, and discrete and continuous features arise naturally.
10.5446/55519 (DOI)
So the next talk is by Axel Osmond, and he will talk about the over-topos at a model. And this is joint work with Olivia Caramello. Okay, Axel. Thank you very much. Thank you for being at this talk, where I have the pleasure to present this joint work with Olivia Caramello. So let me just share my screen. So let me first introduce my topic. If you have a topological space, then you have a specialization order between points, and at a given point you can look at the up-set or the down-set of this point for the specialization order. The up-set contains all points that lie above a given point x, and the down-set contains the points that are below a given point. And of course this construction generalizes to arbitrary subsets of your topological space. In this talk, we are interested in the topos-theoretic analog of the down-set. So recall that a Grothendieck topos has a category of points, which are the geometric morphisms from Set into this topos, and you can look at the under-category and the over-category at a given point; those are the analogs of the up-set and the down-set respectively. And of course this generalizes to arbitrary geometric morphisms, where you can look at the under- and over-category of the hom-category of geometric morphisms between two toposes at a fixed geometric morphism. In this talk, we are interested in the over-category at a given point. This leads us to the notion of totally connected topos. A geometric morphism is said to be totally connected if the inverse image part has a Cartesian left adjoint, that is, one which preserves finite limits. And then the pair of adjoints given by this left adjoint and the inverse image is a terminal object in the category of sections of your geometric morphism.
And in particular, as it is quite common to classify geometric properties of toposes by properties of their terminal geometric morphism to Set, the global sections functor, we say that a Grothendieck topos is totally connected if its terminal geometric morphism is totally connected as a geometric morphism. And this means that your topos has a terminal point. In particular, if it is the classifying topos of a geometric theory, then this means that the geometric theory has a terminal model in Set, and in fact in any arbitrary topos. So first I would like to recall something which is already known, but which will be useful to understand what we are going to do, which is that we can canonically construct a totally connected geometric morphism at a given arbitrary geometric morphism. To do so, recall that the category of Grothendieck toposes has powers by 2, with a universal 2-cell classifying natural transformations between geometric morphisms into a given topos. And the codomain part of this universal 2-cell, which is called the universal codomain, is always totally connected. This is in some sense a generic totally connected morphism. And actually, using the fact that totally connected geometric morphisms are stable under pullbacks, we can use this generic morphism to construct the free totally connected geometric morphism at another geometric morphism. To do so, we just have to take the 2-pullback of the universal codomain along a point, if we want to compute the over-topos at a point, or along an arbitrary geometric morphism. In both cases, the universal property of this construction is that its points will be exactly the over-category at the corresponding point, or, for an arbitrary geometric morphism, the over-category at this geometric morphism.
In particular, in the case where this geometric morphism is the name of a model of a geometric theory, that is, if our topos is the classifying topos of a geometric theory, then the universal property of the over-topos is that it classifies homomorphisms from T-models into the inverse images of this model along arbitrary geometric morphisms. So the purpose of this talk is to provide a site description of the construction of the over-topos, because as it is done this way, it is an abstract universal construction which does not retain any information about the site, and in particular the syntactic properties encoded in the model structure and in the structure of the classifying topos of a theory. So we want to provide a canonical site for this construction, and we will do this in two steps. First, we will process the case of a model in Set, that is, a point of a topos, and then we can do the general case, an arbitrary geometric morphism, but that will use somewhat more involved technology about stacks and indexed categories and so on. For the Set-valued case, recall first that a geometric theory has its syntactic site, where objects are formulas in context, morphisms are provable-equivalence classes of provably functional formulas between those formulas in context, and the topology, the syntactic topology on this category, is generated by the covers encoding the provable geometric sequents of your theory. Now a model in Set is a flat functor, continuous for the syntactic topology, into Set, sending a formula in context to its interpretation, and we can look at the category of global elements of this functor: objects are made of a formula in context together with a global element of its interpretation, and a morphism between two global elements is just an underlying morphism between the formulas in context whose interpretation commutes with the corresponding global elements.
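As a toy illustration of the category of global elements (with hypothetical names, not an actual syntactic site), it can be computed mechanically for a finite Set-valued functor:

```python
# A finite Set-valued functor presented concretely; the names
# 'phi', 'psi', 'theta' are illustrative stand-ins for formulas in
# context and a provably functional formula between them
objects = ['phi', 'psi']
morphisms = {'theta': ('phi', 'psi')}          # name -> (domain, codomain)
M_obj = {'phi': {0, 1}, 'psi': {'u'}}          # interpretations M(obj)
M_mor = {'theta': {0: 'u', 1: 'u'}}            # action M(f) on elements

# Objects of the category of global elements: pairs (formula, element)
elements = {(c, x) for c in objects for x in M_obj[c]}

def hom(src, tgt):
    # Morphisms (c, x) -> (d, y): morphisms f: c -> d with M(f)(x) = y
    (c, x), (d, y) = src, tgt
    return [f for f, (c0, d0) in morphisms.items()
            if (c0, d0) == (c, d) and M_mor[f][x] == y]
```

The antecedent topology described below then covers an element by all its preimages along the interpretations of a syntactic cover.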
Now in Set we are going to use an interesting property that makes the construction easier, which is that the singleton 1 is a generator in Set; that is, any set is a coproduct of copies of 1 indexed by its global elements. In particular, this will be true for the interpretations of formulas in context by a model M, and this decomposition will also be preserved under inverse image; that is, if you look at the inverse image of a model in Set, then the decomposition is still true after applying the inverse image. Moreover, if you consider a model, that is, a flat functor continuous for the syntactic topology, then covering families for the syntactic topology are sent to jointly epimorphic families, which means that if you take a global element of an interpretation, then there is at least one member of the cover such that this global element has an antecedent along this member of the cover. This leads us to consider the following topology on the category of global elements, consisting of the families of all possible antecedents of a global element along the interpretation in M of a syntactic cover. This generates a topology we call the antecedent topology on the category of global elements, and our main result is that the category of global elements with the antecedent topology is a site of presentation for the over-topos. To prove this, we have to show that it possesses the universal property of the over-topos: we must prove that any geometric morphism from an arbitrary topos into this sheaf topos is the name of a homomorphism of T-models into the inverse image of M along that geometric morphism. So suppose we have such a homomorphism into the inverse image of your model. This is the same thing as a natural transformation into the inverse image of your model from a flat functor continuous for the syntactic topology.
This natural transformation has components indexed by objects of the syntactic site. Now if you take an element of the interpretation in M of a formula in context, then you can look at its inverse image, and take the fiber of this inverse image along the component of your natural transformation at the corresponding formula in context. This gives you a fiber object at each element of M, and you can do this in a functorial way, thanks to the naturality of your transformation, using the pullback property. This gives you a functor from the category of elements of M into G, and proving flatness for the antecedent topology is then mostly an application of stability of coproducts and epimorphisms in Grothendieck toposes. On the other hand, a geometric morphism into the sheaf topos over this site is the same thing as a flat continuous functor from the category of elements with the antecedent topology. In particular, this gives you a family of objects indexed by the global elements of interpretations. For a given interpretation and a choice of an element, you can take the object corresponding to that element, take its terminal map to 1, and compose it canonically with the name of the inverse image of this element; this gives you a map into the interpretation in the inverse image of M. Then you can glue together all those objects corresponding to global elements of a fixed sort, and this gives you a map corresponding to a given formula in context. In this way one actually constructs a model of T in G, where the interpretation of a given formula in context is the corresponding coproduct reconstructed above, and the homomorphism of T-models into the inverse image of M is provided, as you can see there, by the universal property of the coproduct at each formula in context.
So the reversibility of this process is mostly an application of extensiveness and stability in Grothendieck toposes, so I won't dwell on it; I prefer to turn to the logical aspects of this construction. Now that we know that this new over-topos classifies homomorphisms into inverse images of a fixed model, we would like to know which geometric theory is classified by the over-topos. To do so, we have to define an over-language, in which this new theory will live, and an auxiliary language, with which we will test what kind of sequents we want to have in this over-theory. The over-language will consist of a new sort for each global element of your model and a new function symbol for each morphism in your category of global elements of this model. On the other hand, the auxiliary language will be an extension of the ambient language, in which you add a new constant symbol for each global element of your model. Then your model is canonically equipped with an extended structure for this extension of the language, where you interpret the constant corresponding to an element by that element. You also have a canonical interpretation from the over-language to the auxiliary language, which replaces in a formula all instances of free variables of the sort corresponding to an element with the corresponding constant of the auxiliary language for this element. And then the over-theory associated to a model will have as axioms all the geometric sequents in the over-language whose interpretations in the auxiliary language are valid in the extended structure your model canonically carries for the auxiliary language. And actually, we can also prove that the over-topos, as we constructed it, is the classifying topos for this over-theory. So now I turn to the construction in the general case.
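In symbols, the over-theory data just described might be summarized as follows (the notation here is my own shorthand, not taken from the talk):

```latex
% Sorts and axioms of the over-theory (assumed notation): one sort X_c
% for each global element c of M, one function symbol for each morphism
% of the category of global elements; L' extends the ambient language
% with a constant \bar{c} per element c, and the translation (-)^* sends
% a free variable of sort X_c to \bar{c}.  Then
\[
  \mathbb{T}_M \;=\; \{\, \sigma \text{ a geometric sequent over the
     over-language} \;\mid\; M' \models \sigma^{*} \,\},
\]
% where M' is M equipped with its canonical L'-structure interpreting
% each \bar{c} as the element c.
```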
In the general case, the complication is mostly that 1 is no longer a generator in an arbitrary Grothendieck topos. This means that we cannot just restrict to global elements of a model; we have to consider arbitrary generalized elements, and there are more morphisms between them; they have a more complicated structure. But at least, in the following, we will be able to restrict to the ones that are indexed by basic elements: that is, if you have a presentation site for your topos, you will be able to restrict at least to generalized elements indexed by an object coming from the site of presentation of your topos. In any case, the complication is that not only may the formula being interpreted vary, but we must also take into account the variability of the indexing object of the generalized element. To address this problem properly, I first have to say a few words about the notion of stack and the construction of the topology. As Ricardo told you in the last talk, there is a notion of topology for an indexed category. In particular, if you have a Cartesian stack on a Grothendieck topos, then you can consider the Grothendieck construction associated to this Cartesian stack and equip the corresponding fibered category with the Giraud topology, which is the smallest topology making the corresponding fibration a comorphism of sites. And in particular, if you have a small site of presentation for your base topos, you can compose with the Yoneda embedding, and this gives you a fibration over the site of your topos, and you can restrict the Giraud topology to this site. And you can actually give a concrete description of covering families for the Giraud topology, as the Cartesian lifts of covers for the topology of your base.
It is the same thing whether you take the topos of the Giraud topology or its restriction along the unit of the embedding of the site of presentation. What is important is that you get a geometric morphism induced by the comorphism you have thanks to the Giraud topology. Now, something that was remarked by Olivia recently is that if you have a geometric morphism, then you can canonically construct two indexed categories associated to it: one indexed by the domain topos, which sends an object E to the category of E-indexed generalized elements of the inverse image part of your geometric morphism, and on the other hand, for the codomain topos, an indexed category which sends an object to the category of generalized elements of its inverse image. What happens is that both these indexed categories induce the same Grothendieck construction, the same category: the comma category of the terminal object of the domain topos and the inverse image, which is equipped with two fibrations, on the left over the domain topos and on the right over the codomain topos. Now you can equip this comma category with the smallest topology which simultaneously makes those two fibrations comorphisms of sites; this topology, which we call the lifted topology, is jointly generated by the two Giraud topologies associated to those two fibrations. Importantly, something we would like to have is a way to restrict to small sites of presentation for the domain and codomain topoi, because in practice we want small sites, and the category I defined there is a large site.
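In symbols (notation mine, not from the talk), for a geometric morphism f : F → E, the common Grothendieck construction just described is the comma category

```latex
(1_{\mathcal F} \downarrow f^{*}) \;=\;
  \bigl\{\, (F, E, x) \ \bigm|\ F \in \mathcal F,\ E \in \mathcal E,\ x : F \to f^{*}E \,\bigr\},
\qquad
p_{\mathcal F}(F,E,x) = F,\qquad p_{\mathcal E}(F,E,x) = E,
```

with the lifted topology the smallest one making both projections p_F and p_E comorphisms of sites — equivalently, the topology jointly generated by the two Giraud topologies.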
To construct a small site: if we have a site of presentation for the codomain topos, then we can look at the comma category where, on the right, we consider the inverse images of basic objects — that is, the composite of the inverse image with the Yoneda embedding — and we can restrict the lifted topology to this category. We still have a fibration over the site of the codomain topos, and this restriction gives you a comorphism of sites. On the left, if you have a presentation site for your domain topos, you can also restrict along the Yoneda embedding. Again, you have an analogous way to restrict the lifted topology to this comma category, and the interest is that the new comma category is small, with a canonical topology on it derived from the lifted topology. A problem with this construction is that it is a bit less pure, because it breaks the property that we have a fibration: we will see that inverse images of basic generalized elements are not necessarily basic. But this is not very important for our purpose. So, just a word on the concrete description of the restriction of the lifted topology in this context. It consists of families where you first take a generalized element of the inverse image of a basic object, then you take the inverse image of a cover of that object, and then you take a cover of each fiber of this generalized element. So now I would like to apply this to the case where you consider a model of a geometric theory in an arbitrary topos.
A T-model in a Grothendieck topos with its standard site of presentation is the same thing as a flat functor, continuous for the syntactic topology, into this topos; and this defines a Cartesian stack sending an object C of your base site to the category of C-indexed generalized elements of the functor coding for your model. Again you can define the category of generalized elements of your model, which is the comma category between the Yoneda embedding and the flat functor coding for your model. And again you can equip it with the restriction of the lifted topology we described above. In this case it has the following presentation: you take a generalized element of the interpretation of a formula in context; you look at the interpretation in M of a syntactic cover of the underlying formula in context; and then you ask for covers of each fiber of this generalized element along the members of your syntactic cover. In some sense you ask for a fiberwise covering of the fibers, and this is a generalization of the antecedent topology we constructed in the set-valued case. This again gives you a comorphism of sites from the comma category to the site of the topos in which you interpret your model, and this comorphism of sites gives you a sheaf topos and a canonical geometric morphism. And the result is that this sheaf topos over the comma category, with the antecedent topology, has the universal property of the over-topos at the model, and in particular the geometric morphism induced by the comorphism is the totally connected universal morphism at this model.
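Concretely, a covering family for this generalized antecedent topology can be written as follows (schematic notation of my own):

```latex
% A covering family at a generalized element  x : C \to [[\varphi, \vec{x}]]_M :
%  (1) choose a syntactic cover  ( [[\theta_i]] : [[\psi_i, \vec{y}_i]] \to [[\varphi, \vec{x}]] )_{i \in I};
%  (2) for each i, choose a cover in the base site of the corresponding fiber:
\Bigl(\, C_{ij} \;\longrightarrow\;
      C \times_{[[\varphi,\vec{x}]]_M} [[\psi_i,\vec{y}_i]]_M \,\Bigr)_{j \in J_i}.
```

The family of composites C_{ij} → C, over all i and j, is then the fiberwise covering the speaker describes.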
Again, proving that this topos has the universal property of the over-topos is quite similar to the set-valued case. The principal difference is just that when you construct a model from a natural transformation, as in the set-valued case, you will have to glue objects indexed by a diagram of generalized elements rather than just the coproduct of a discrete set of global elements; but this is not a very important difference, and there is no need to be too long about it, so I would prefer to finish now. Perhaps just a last remark on this construction: at some point, for practical reasons, we wanted site descriptions, so we restricted to these restrictions of the lifted topology, which are concrete but a bit cumbersome, and may seem a bit ad hoc. But actually it is again a situation in which, if you drop the site information and go to a more abstract, more invariant level — that of the Grothendieck topoi themselves — you have a purer description in which the universal properties of your objects are more evident. So now, to finish, I will give a few perspectives for this work. First, totally connected geometric morphisms and topoi are a notion dual to the notion of local geometric morphisms and topoi, and it is well known that for a point of a topos you can look at the local topos over this geometric morphism, the Grothendieck–Verdier localization; this is a construction dual to the over-topos, and it is obtained through a formula describing it as a cofiltered 2-limit of étale geometric morphisms. Something we would like to know is whether the over-topos also has a similar description as a cofiltered 2-limit — but of what kind of morphisms? It would be a generalization of the fact that the closure of a point is the intersection of the closed neighborhoods it has.
Finally, there is the question of what is the geometric theory classified by the over-topos in the arbitrary case — perhaps this calls for a notion of relativized geometric theory, we don't know. And it would be interesting to have some description of the functor which is the analogue, for totally connected topoi, of the functor externalizing an object of a topos into an étale geometric morphism over E. So thank you for your attention. — Okay, thanks a lot Axel for this very nice talk.
For a model of a geometric theory in a Grothendieck topos, we can construct the over-topos of this model classifying homomorphisms above it. In this talk, we provide a site theoretic description of this construction. In the case of a set-valued model, a site will be provided by the category of global elements together with a certain antecedents topology, and we can describe a canonical geometric theory classified by this over-topos. In the general case, one must consider a more complicated category of generalized elements; an antecedent topology then can be recovered through a notion of lifted topology, whose construction can be understood in the framework of stacks and comorphisms of sites. This is joint work with Olivia Caramello.
10.5446/55521 (DOI)
I think you can start with the higher topoi. Okay, I'll do that. All right, so I've told you enough about infinity categories to get going with, I hope. So let me recall a definition of Grothendieck topos. This is my favorite definition because it's so compact. A Grothendieck topos is a category E such that there exists a bunch of stuff: a small category C — these are 1-categories, I'm in 1-categories now — a fully faithful functor from E to the category of presheaves of sets on C, which has a left adjoint; and furthermore, that left adjoint preserves finite limits, so it's left exact. So Grothendieck topoi are the left exact localizations of presheaf categories, is how people often say this. It's well known that there's a correspondence between these left exact localizations of the presheaf category and the Grothendieck topologies on C. So this actually recovers the characterization, familiar to many people, of Grothendieck topoi as sheaves on a Grothendieck site. Let me rant a little bit about this. This is well known — this whole theory is in SGA, for instance; it goes back to the beginning of the subject. But it's not as well known as it should be. At least when I was young and learning about all these things — learning what a topos is, what Grothendieck topologies are — I did not know this fact for a long time. You've got a Grothendieck topology, and that seemed interesting, but kind of complicated and a little ad hoc, and I didn't understand that it actually has this very simple characterization. Of course, you need to know what Grothendieck topologies are in order to work with this, but it's a very cute characterization. Some books on topoi don't mention this at all; many do mention it, but it's not in chapter one, so I never ran into it for a long time. All right. So that's what a Grothendieck topos is. It's convenient not to have to mention Grothendieck topologies at this point, and we'll see why later.
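In symbols, the compact definition just stated reads (standard notation, not taken from the lecture slides):

```latex
% E is a Grothendieck topos iff there exist a small category C and an adjunction
a \;\dashv\; \iota,
\qquad
a : [C^{\mathrm{op}}, \mathbf{Set}] \longrightarrow \mathcal{E},
\qquad
\iota : \mathcal{E} \hookrightarrow [C^{\mathrm{op}}, \mathbf{Set}],
```

with ι fully faithful and a preserving finite limits — a "left exact localization" of a presheaf category.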
But one reason is that we can take this definition and automatically port it into infinity categories. Oh, before I do that, let's notice that some of the properties of topoi actually fall directly from that definition. For instance, presheaves of sets are Cartesian closed by a standard argument: you know how to write down the internal function object. And if you have one of these left exact localizations — let's say E is actually a subcategory, and you have this left adjoint, which is called sheafification — then if you have an object Y in the subcategory, it's easy to check that the internal function object is also in the subcategory, and in fact is an internal function object there. This actually only uses the fact that L preserves products; that's the property you need. Another fact about topoi is that the slice of a topos over an object is also a topos. If I have presheaves on C, I have one of these full subcategories E, and I pick an object X in the full subcategory, I get functors on the slices: the inclusion restricts to a fully faithful functor on slices, and the left adjoint restricts — because X is LX, or isomorphic to it — to a functor on the slices which is a left adjoint, and in fact is left exact. And this category of presheaves sliced over X is actually equivalent to a category of presheaves on some comma category. So from this definition it's kind of easy to show that a slice is a topos. You can also produce subobject classifiers: you construct the subobject classifier in presheaves, and then the left adjoint gives you an idempotent on that subobject classifier; you split off that idempotent, it turns out to be a sheaf, and that's the guy. That's a theory that's probably familiar to some people. Okay, so this is a nice characterization, and it's the one that I'll generalize.
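The claim that the internal hom of a sheaf is again a sheaf, using only that L preserves products, can be spelled out as a chain of natural bijections (a sketch in standard notation: L ⊣ ι, Y a sheaf, X any presheaf, Z an arbitrary test presheaf, and using LLX ≅ LX):

```latex
\mathrm{Hom}(Z,\, Y^{X})
  \;\cong\; \mathrm{Hom}(Z \times X,\, Y)
  \;\cong\; \mathrm{Hom}\bigl(L(Z \times X),\, Y\bigr)
  \;\cong\; \mathrm{Hom}(LZ \times LX,\, Y)
  \;\cong\; \mathrm{Hom}\bigl(L(LZ \times X),\, Y\bigr)
  \;\cong\; \mathrm{Hom}(LZ \times X,\, Y)
  \;\cong\; \mathrm{Hom}(LZ,\, Y^{X}),
```

naturally in Z. So every map Z → Y^X extends uniquely along the unit Z → LZ, which is exactly the condition for Y^X to lie in the subcategory of sheaves.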
So, an infinity topos — I've put in red the things that change from the previous definition — is an infinity category E such that there exists a small infinity category C, and an accessible, fully faithful functor from E to the infinity category of presheaves on C valued in infinity groupoids. I will use this notation a lot; it will always mean presheaves of infinity groupoids. (I can't spell "presheaves".) And it has to have a left adjoint, and that left adjoint has to preserve finite limits. That's the definition. Okay. I do need to explain "accessible". That's a technical condition that did not appear in the original definition I gave for a Grothendieck topos, and I don't want to dwell on it; it's technical, and it comes from the infinity categorical analogue of the theory of accessible 1-categories. I've written out the definition here: if I have some cocomplete C, I can talk about a functor preserving kappa-filtered colimits, where kappa is a regular cardinal; that's what an accessible functor is. A kappa-filtered colimit is a colimit over a kappa-filtered category, and there's a definition of kappa-filtered category in terms of extending cones of kappa-small infinity categories. If you take out the infinities here, you get a very familiar notion: in fact, for omega, the smallest regular cardinal, in a 1-category, that's the notion of filtered 1-category; and there are these cardinal generalizations, and all that carries over to infinity categories. This is done by Jacob Lurie in his big book. In a Grothendieck topos — the 1-categorical notion — accessibility of the inclusion follows from the other axioms, so it's accessible too; I just didn't need to include it as an axiom. Here I'm going to want to. Presumably I could leave it out, but I don't think that's been very well studied. We'll stick with this. All right.
While I'm at it, let me introduce this other concept called a presentable infinity category; this is the infinity categorical generalization of what are called locally presentable categories. If you don't know what that is, it's good to learn about — I wish I'd known this much earlier than I learned it. A presentable infinity category has the same list of conditions, except that I drop the fourth one, that the left adjoint be left exact: it doesn't have to preserve finite limits. Almost anything of consequence ends up being a presentable infinity category — or, in 1-categories, a locally presentable category. Categories of algebras like groups, rings — those are all locally presentable. So this is a very large class of things, which are always complete and cocomplete and have various good properties. I don't want to dwell on this, but it's an interesting, useful concept. So one question you may ask is: okay, I didn't use Grothendieck topologies in my definition, so where are they going to be? I will come back to this question later. The following is true, though. If I have a Grothendieck site on a 1-category — a Grothendieck topology on a 1-category C, the usual notion of a Grothendieck site — then I can form an infinity topos, which will be the full subcategory of the presheaves of infinity groupoids on this 1-category C consisting of sheaves. For instance, in the special case when I have a topological space, C is the poset of open sets of X, the Grothendieck topology is the usual one, and the sheaves are the ones I defined in the previous hour — it's literally that definition I gave. And I could give a similar definition in the case of a general Grothendieck site. So these are examples of infinity topoi. Are they all of them? We'll find out. All right. I do want to make sure I don't go over on time; there's a lot of material here, some of which I can pass over quickly.
So what I want to do in this hour is talk about a characterization of infinity topoi that's more intrinsic. What I'm going to talk about is analogous, although not identical, to what's called the Giraud theorem, which is a characterization of Grothendieck topoi, and I've written it here. I'm not really going to talk about some of the elements of this statement, but I wanted to put it up so you know what I'm talking about. A 1-category is a Grothendieck topos if it's locally presentable — so it's one of these nice categories — and then it has three more properties, which have a more elementary character: colimits are universal, coproducts are disjoint, and equivalence relations are effective. We're not going to need these last two. This particular formulation is not the one that's going to generalize to infinity topoi, so I don't want to take a lot of time talking about it, but there are characterizations like this for Grothendieck topoi: a locally presentable category with some additional properties. So here's one of the possible characterizations of infinity topoi you can write down, analogous to the one I just gave for Grothendieck topoi. An infinity category E is an infinity topos if and only if it is presentable, in the sense I described earlier, and has two more properties, which are the ones I want to focus on: colimits are universal — that appeared in my statement of Giraud's theorem — and colimits satisfy descent. So what I'm going to do now is talk about this equivalence and these properties. I should say here, before I go on, people often group these two things as one property and just call it descent; if you have property three, you usually want to have property two, so they go together. I'm following Lurie here and keeping the concepts separate. What I want to do first, before I go into the definitions of these, is talk about homotopy theory. These things actually have roots in homotopy theory.
And I think it's good to establish how that works out, because these kinds of conditions don't come from nowhere — they come from something that already existed and was there to be generalized. Another reason is that this particular property of descent that I'm going to talk about isn't satisfied in any 1-category at all, except the trivial one, so there isn't a good 1-categorical model for it, and for that reason it's good to give you some intuition for how to think about it. All right. So I want to think a little bit about homotopy theory. I've told you that the homotopy theory of spaces is somehow the same as the infinity category of infinity groupoids, whatever that means. But people did homotopy theory long before they knew what an infinity category was. In that context, people talked about something called homotopy limits and colimits. Going back very early in the subject, people recognized there are certain kinds of diagrams, often limit or colimit diagrams of spaces, that had a special role. For instance, when you write a space as a union of two open sets, and then you have the intersection — that's important. This behaves well for many reasons; for instance, it behaves well with respect to invariants like homology theories. That's the Mayer–Vietoris theorem, which tells you, roughly, how to compute the homology of X from the homology of the pieces U, V and the intersection. Another example is fiber bundles: if you have a fiber bundle whose fiber is homeomorphic to F, then this behaves well with respect to homotopy groups — you have a long exact sequence in homotopy. So these have a very important role, and they are special cases of what are called homotopy pushouts and homotopy pullbacks. So in homotopy theory, say in spaces, you can identify certain classes of commutative squares as being homotopy pushouts or homotopy pullbacks.
One way you can do this is to say that a commutative square is a homotopy pushout if you can connect it, by natural transformations which are weak equivalences at every corner, to a square of a particular form — for instance, a square which is an honest pushout along a nice map called a cofibration. Analogously, there's a complementary theory where you take a pullback along a fibration, and that gives you the basic examples of homotopy pullbacks. So there's a recipe for computing homotopy pushouts or homotopy pullbacks. If you want a homotopy pushout, very often you have some random map on one side of your square, and what you do is factor it as a cofibration followed by a weak homotopy equivalence. Then you take the actual pushout where you replace the original object — call it V-tilde, V-prime on the page — that is, you replace the original map by that cofibration, take the pushout, and that's a homotopy pushout. So if you have a general square that doesn't involve a cofibration, that's how you compute it. There are recipes like this; the details aren't too important here. You can do this in spaces, you can do it in simplicial sets, which is also a model for the homotopy theory of spaces, or you can do it in any Quillen model category. When I say, by the way, that a homotopy theory has a model, I mean in the sense of Quillen model categories, which gives you these fibrations and cofibrations and such. In spaces, this actually leads to very general geometric pictures. For instance, if I have a span like this — A maps to X and Y by some maps, which might be complicated, not inclusions in particular — I can replace it by something called the double mapping cylinder. I connect the images of those maps by a tube, A times the unit interval, and then I'm really forming a pushout of A including into some replacements for X and Y, which are X and Y together with tubes with A on the end.
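The double mapping cylinder of a span X ←f− A −g→ Y just described has an explicit formula (standard notation, not from the lecture):

```latex
X \cup_{f} (A \times [0,1]) \cup_{g} Y
\;=\;
\bigl(\, X \,\sqcup\, (A \times [0,1]) \,\sqcup\, Y \,\bigr) \big/
\bigl(\, (a,0) \sim f(a),\ \ (a,1) \sim g(a) \,\bigr),
```

which is an honest pushout along the cofibration A ⊔ A ↪ A × [0,1], and hence a homotopy pushout.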
That's an explicit construction of a homotopy pushout, so it's a good geometric picture to have. The classical understanding of homotopy colimits and limits was that they were derived functors: in a precise sense, the best homotopy invariant approximation to the actual limit or colimit in topological spaces, or in simplicial sets, or in general in some Quillen model category, depending on what kind of homotopy theory you want to study. That was the classical understanding, made formal in the 70s. But now, in the infinity categorical language, these just correspond to what I called limits and colimits: the infinity categorical limits and colimits, which are characterized by an infinity categorical universal property, correspond to this older notion. Now, in homotopy theory, the homotopy theory of spaces plays a special role. There are many other homotopy theories: there are chain complexes and chain homotopy equivalences, there are simplicial rings, and all sorts of weird things you can construct. But spaces are, of course, special — and not just because they're the first example. They're special for various reasons. One thing I want to emphasize is that in spaces, homotopy limits and colimits have some additional properties which are not shared by general homotopy theoretic settings. So one question you might ask is: what are the properties of constructions like homotopy limits and colimits that are characteristic of classical homotopy theory — or, as we would now say, the infinity category of infinity groupoids? And the notion of infinity topos actually arises from one answer to these questions.
You may have heard that you can think of a topos as some sort of generalization of the category of sets — a universe of generalized sets, in some sense; it has features like the category of sets. And there's a precise analogy: an infinity topos is a universe of things that are like spaces from the point of view of homotopy theory. This analogy is what is going to lead us to the characterization of infinity topoi that I want to talk about. Okay, so, still talking about homotopy theory. One thing I can talk about is what I'll call the universality of homotopy colimits, and I'll just do this in the special case of pushouts, because I can draw the diagrams. I'll think about this primarily in the simplicial set model. The simplicial set model has a feature: the underlying category, simplicial sets, is a topos — presheaves of sets on delta. And I'm going to make reference to the fact that it is a topos, but I also care about the homotopy theoretic aspects. So we can think about homotopy pushouts in simplicial sets, and those are always weakly equivalent to pushout squares along monomorphisms. Now I could do the following. I pick some map from Y to X, call it p, and I pull back the whole diagram, the whole square, along that map p — these are also, if you like, the preimages of the X's along this p, which is actually Y zero — and I get a new commutative square. Let's do it in this case: I've drawn this picture again, so there it is. When I pull back, the right hand square maps to the left hand square. Notice the monomorphisms — cofibrations, by the way, I should have said: in simplicial sets it's very convenient, the cofibrations are exactly the monomorphisms, so cofibrations pull back to cofibrations. Furthermore, you have an interesting property: this pullback square is also a pushout. That is the fact that pushouts are universal, as they say, in simplicial sets.
In fact, they are universal in any topos: if you pull back a pushout along a map to the target, you get another pushout diagram. Therefore this other square — the square I said was some kind of homotopy pushout; it's an actual pushout, but along cofibrations — this one's also a homotopy pushout. Let me put one more thing into the mix. This is just a slide explaining what universality of colimits means in a 1-topos, or in a 1-category. Colimits being universal just means that for any morphism, the base change functor, which I'll call f upper star, from the slice over X to the slice over Y, preserves all colimits. That's the definition of colimits being universal in a category with pullbacks. In a 1-topos it's even better: you actually have a right adjoint. Actually, you have two adjoints — you always have a left adjoint, but there's an interesting right adjoint: because f star preserves colimits, you expect it to have a right adjoint, and it does. Okay, so that's just a property of a topos. But now I want to put in the homotopy theory. I'll take my original diagram — the diagram I had before — but now I'll actually draw it as a cube. The top and bottom squares are pushouts, and in fact are examples of homotopy pushouts because they're along monomorphisms. And I'll suppose that this map here is a fibration, which is the correct notion of fibration in simplicial sets; it doesn't matter exactly what it is. The important thing is that if I form pullbacks, I get fibrations along all the sides — that's what these double arrows mean — and therefore, by my general theory of homotopy pullbacks, all four sides are homotopy pullbacks. So I have a commutative cube where I took the bottom square, which is a homotopy pushout, and then took the homotopy pullback along p of everything, and I observe that I get a homotopy pushout along the top. So we'll give this property a name.
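In symbols (standard notation): colimits in a category with pullbacks are universal when every base change functor preserves them, and in a 1-topos this is automatic because f* sits in an adjoint triple

```latex
\Sigma_f \;\dashv\; f^{*} \;\dashv\; \Pi_f,
\qquad
\Sigma_f,\ \Pi_f : \mathcal{E}/Y \longrightarrow \mathcal{E}/X,
\qquad
f^{*} : \mathcal{E}/X \longrightarrow \mathcal{E}/Y,
```

so f*, being a left adjoint (to the right adjoint Π_f), preserves all colimits.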
We could say that homotopy pushouts are universal in the homotopy theory of simplicial sets, which is the same as the homotopy theory of spaces. That's the universality of homotopy pushouts, if you like. And you can do this generally for any kind of homotopy colimit — I won't try to draw or define arbitrary homotopy colimit diagrams; the same thing works. You actually only need two cases: homotopy pushouts and coproducts. From those two cases you can derive everything. All right. Now, there's a more subtle property called descent. It's about what happens if you do things in the opposite order. The idea was that in this picture, I started with some kind of homotopy pushout, I pulled it back along a map over the colimit itself, and then I got another pushout. Now I'm going to do this in the other order. I'm going to start with a commutative diagram like the one on the left here, where the squares are both homotopy pullback squares. And I'm being a little careful here: I'm not assuming they're honest pullback squares. A homotopy pullback doesn't have to be a pullback — it just has to be weakly equivalent to one; that construction is just the most convenient way to compute it. But if I have a commutative diagram like this, I can take the homotopy colimits horizontally — the homotopy colimit construction can be set up as an honest functor, so you get a map between the homotopy colimits. And then the descent condition says: if I form the commutative squares, one for each i, which involve the inclusion of each of these X i's into X and the corresponding ones of the Y's into Y, each resulting square is a homotopy pullback, for all values of i. So I start with a diagram with these pullbacks, I push out, and then I pull back again, and descent says I get back to where I started — except with homotopy pushouts and homotopy pullbacks.
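Stated as a diagram (my notation): given a natural transformation α : F ⇒ G of I-indexed diagrams of spaces all of whose naturality squares are homotopy pullbacks, descent asks that for every i ∈ I the comparison square

```latex
\begin{array}{ccc}
F(i) & \longrightarrow & \operatorname{hocolim}_{I} F \\[2pt]
\downarrow & & \downarrow \\[2pt]
G(i) & \longrightarrow & \operatorname{hocolim}_{I} G
\end{array}
```

be a homotopy pullback again.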
Let me draw a picture. Let's take a map of spaces — I'm going to draw the picture for spaces, but in simplicial sets it's the same; I would draw exactly the same pictures, because, as you can tell, I think of them as almost the same thing. Let's take a diagram like this. On the bottom I'll just have the one point space and the two point space, and on the top, copies of X: X over the point, X over the point, and then two copies of X, one over each point. I'll have this commutative diagram, using the identity map in most places along the top, but here I'll use f in one of the places. If I form the homotopy pushout horizontally — so I form these double mapping cylinder constructions — on the bottom I'll just get a circle. On the top I'll get a construction which is called the mapping torus of f: geometrically, you take X times an interval, here it is, and then you glue the ends together using the map f to identify one end with the other. If f is a homeomorphism, this ends up being a fiber bundle with fibers homeomorphic to X. Furthermore, you get maps from each of these things back into the whole thing, so you get some pullback squares. By the way, I should say here: when f is a homeomorphism, these are actual pullback squares. So we're in a situation where you have actual pullback squares and you get a fiber bundle, and then we check: oh, if we pull back, we get back the original things — the fibers of this p are actually X. My picture, by the way, looks kind of like a Klein bottle because I can't draw very well. But also, the Klein bottle is an example: you use the obvious inversion of the circle, and that produces the Klein bottle. But I could instead do the following: maybe f isn't a homeomorphism, it's just a homotopy equivalence. Then you don't get a fiber bundle in general.
But you can still say that the homotopy fibers — that is, the homotopy pullbacks over any point — are weakly equivalent to X. So you get a fibration up to homotopy, which has the correct fibers. If f is only a homotopy equivalence, of course, these aren't pullback squares anymore, because you're using some weird map which isn't a homeomorphism; in particular, the right hand one could fail to be a pullback. But if f is a homotopy equivalence, they'll still be homotopy pullbacks. This is not true in sets — in the 1-category of sets, where I'll take equivalence to mean isomorphism, and I don't have homotopy pullbacks, just pullbacks. Let's draw the same picture. I have a set X with an endomorphism f of my set, and I'll take pushouts. The pushout on the bottom is just a single point, not a circle. The pushout on the top is a quotient of X where I identify any point with its image under f — the orbits of f. And then for each i, one or two in my original diagram, I'll get squares like this, and I can ask myself: is this a pullback square? The answer is that this is a pullback only if f is actually the identity; most of the time it's not a pullback. It just fails — descent does not work in sets, not even descent for pushouts. I should clarify: in a 1-category in general, this doesn't happen very often, but it does work sometimes. For instance, if the maps along the bottom are monomorphisms, then it's okay: if you take this diagram with actual pullbacks, in sets or a Grothendieck topos, and the horizontal maps are monomorphisms, then it is true that if you push out and then pull back, you get back to where you started. But in general it is not true. Okay. So this page is for the homotopy theorists; I'll just pass by it. The summary is that I don't know who first understood that descent was a thing.
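The failure of descent in Set just described can be checked concretely. Here is a minimal sketch in Python (the helper name `orbit_quotient` is my own, not anything from the talk): it computes the pushout of the top row, i.e. the quotient of X by the relation x ~ f(x), and shows the comparison square cannot be a pullback when the quotient is smaller than X.

```python
def orbit_quotient(X, f):
    """Quotient of the set X by the equivalence relation generated by x ~ f(x).

    This is the pushout of  X <- X+X -> X  (identity on one leg, (id, f) on
    the other): the set of orbits of f acting on X.
    """
    # Union-find over the relation x ~ f(x).
    parent = {x: x for x in X}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for x in X:
        rx, rfx = find(x), find(f(x))
        if rx != rfx:
            parent[rx] = rfx

    # Each equivalence class, as a frozenset, is one element of the quotient.
    return {frozenset(y for y in X if find(y) == find(x)) for x in X}


# X = {0, 1} with f the swap: a single orbit, so the quotient has one element.
X = {0, 1}
f = lambda x: 1 - x
Q = orbit_quotient(X, f)
print(len(Q))  # 1

# Descent would need the square  X -> Q  over  pt -> pt  to be a pullback,
# i.e. pulling Q back over the point should return X; but the pullback is Q
# itself, and |Q| = 1 != |X| = 2.  So descent for pushouts fails in Set.
```

With f the identity, each point is its own orbit and the square is a pullback — matching the remark that this is essentially the only case where it works.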
I think it's Graeme Segal, who wrote a paper where he said it was well known, but then apparently that held up the paper for a long time, getting a proof of that fact. All right. I have here a sketch of a proof. Shall I give this or shall I pass over it? I want to say one thing about this. You actually have to prove descent in really the key case, pushouts. You can do this in simplicial sets. You write down one of these diagrams, where you have these pairs of homotopy pullbacks. You can always set things up so that the horizontal maps down here are cofibrations and these vertical things are fibrations; you replace the squares by equivalent squares. That's a standard reduction. Then you just have to form the colimit, and that will also be the homotopy colimit. And now you're asking whether the resulting squares are pullbacks. Now there's a special case which does work: it's when the squares are homotopy pullbacks and also actual pullbacks of simplicial sets, pullbacks in the underlying category. They're actually pullbacks, which in general a homotopy pullback doesn't have to be. So all you have to do is replace your diagram with one where the squares are pullbacks. And the tricky part is that you have to do it for both squares at the same time. So you're reduced to one particular problem. I've expanded out one of these squares. The original commutative square involved Y0, Y1, X0, X1; I have some cofibrations and fibrations. There's the honest pullback, and being a homotopy pullback means that the honest pullback along this cofibration is weakly equivalent to the other space, the simplicial set that was originally there. What you want to do is find something that goes here, so that the top square is also a pullback, this map on the side is a weak equivalence, and this map is a cofibration.
If you can do that, then the whole rectangle is a pullback and it's a model for the homotopy pullback you wanted. Well, there's something you could try. If I have a map in simplicial sets, I have a pullback functor, and that has two adjoints, as I pointed out: it's a topos. And I'm interested in the right adjoint. If j is a monomorphism, then in fact, if you form the right adjoint and then pull back, you get the identity again; that's a formal property. So if I put in here pi_j of Y0, where j is this map here, I'll get a pullback square. And the miracle is that this actually works; that's actually the solution to the problem. So I've now constructed an actual pullback square which is also a homotopy pullback square. The hard part is showing these properties. And that's the solution. I mention this because there's a long literature of attempted proofs of descent in various contexts, and they're often quite technical. I'm even responsible for such a paper. But nowadays there's a clean proof. This proof is due to Voevodsky. He introduced this argument not to prove descent per se, but to prove the fact that simplicial sets are a model for a univalent type theory. I may mention something about that before the end of these lectures. Okay, so I just wanted to sketch out that there is a proof of this in the language of classical homotopy theory. All right, okay, so that's the end of this excursion to homotopy theory. So I'm going to return to the infinity-categorical setting and give you some proper definitions. So if I have an infinity category that's co-complete and finitely complete, so it has co-limits and finite limits, then I'll say it has universal co-limits if for all morphisms the induced pullback functor preserves finite limits, is left exact.
No, of course, that's wrong: it preserves co-limits, not finite limits. Okay, that's universality of co-limits; very easy to define. Now I want to define descent. This is a little bit more tricky. So, some definitions. Suppose I have a natural transformation of functors from an infinity category to an infinity category E. I'll say it's Cartesian if for every morphism in the domain category, every one-cell, the resulting commutative square is a pullback in E. This definition makes sense for 1-categories too. I'll call this a Cartesian natural transformation. Now let's consider the arrow category of my infinity category. So Delta-1 is the walking morphism, and I'll write E with an arrow for the arrow category. And I'll define a subcategory of the arrow category, which I'll call Cart of the arrow category. This is a subcategory; it's not full, but it's wide, if that's the right term: it has all the objects, but it only has the morphisms that are Cartesian transformations. In other words, the objects are arrows, but the morphisms are just the pullback squares. And that actually does turn out to be an infinity category. And now I can define descent. I'll say that E has descent if this Cart of the arrow category has all co-limits (and when I say that, I mean all small co-limits, as one does), and if the evident functor, the inclusion functor, back to the arrow category preserves co-limits. That is descent for all co-limits. You can also talk about descent for particular shapes of co-limits; you just restrict to co-limits of a particular shape. I put this reference here because, although in 1974 Puppe did not know what an infinity category was (he only had homotopy co-limits), he nonetheless actually wrote down a formulation of descent in the homotopy theory of spaces which is almost exactly the one I've just told you. All right. So given this, I can now give you the big theorem.
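The two definitions just given can be recorded in symbols (a paraphrase by the editor, not notation copied from the lecture's slides):

```latex
% Cartesian natural transformation: \alpha : F \Rightarrow G, for F, G : I \to \mathcal{E},
% is Cartesian if for every morphism i \to j in I the naturality square
\begin{array}{ccc}
F(i) & \longrightarrow & F(j) \\
\downarrow \scriptstyle{\alpha_i} & & \downarrow \scriptstyle{\alpha_j} \\
G(i) & \longrightarrow & G(j)
\end{array}
\qquad \text{is a pullback in } \mathcal{E}.

% Descent: the wide subcategory of the arrow category whose morphisms are
% the pullback squares has all small colimits, and the inclusion
\mathrm{Cart}\big(\mathcal{E}^{\Delta^1}\big)\;\hookrightarrow\;\mathcal{E}^{\Delta^1}
\qquad \text{preserves small colimits.}
```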
The theorem is that infinity topoi have universal co-limits and descent. I'll sketch the proof; it's in steps. First you show it for infinity groupoids, and that was the purpose of the previous discussion. This uses the simplicial set model of infinity groupoids; well, for us infinity groupoids are Kan complexes, so it sort of was our definition. So those arguments I gave you before are the proof, in simplicial sets; I don't know any other way to do it. Then from that, you get that it's true for pre-sheaves with values in infinity groupoids, because limits and co-limits in a pre-sheaf category like this are computed pointwise. So you can use the fact that you have these properties in S to prove them for pre-sheaves. And then finally, you recall that a general infinity topos is a left exact localization of a pre-sheaf category, and so you use the properties of left exact localization: the functors L and i preserve the relevant finite limits and small co-limits, and those are the things that appear in these definitions, pullbacks and co-limits. So if you want to prove descent, you write down your diagram in E and form the pullback. You compute the limits in E, and you compute the co-limits by going into pre-sheaves, computing there, and then applying the left adjoint. And that's also compatible with forming the limits, because everybody preserves the finite limits, the pullbacks. It's just the obvious argument. Every infinity topos has these properties because infinity groupoids do. We actually get a characterization: an infinity category is an infinity topos if and only if it's presentable, co-limits are universal, and co-limits have descent. I've already told you one direction, so I'll sketch why the other direction is true, because it's interesting and it is an illustration of descent. Descent is the key property in some sense; it's the one that doesn't happen in 1-categories, so that's the one we want to keep an eye on.
All right, so I'm going to briefly sketch this proof. The first step is formal and is really an application of the theory of accessible infinity categories; the analog works the same way in 1-categories. So if you have your infinity topos E, you want to first find an essentially small subcategory C which is closed under finite limits and such that the restricted Yoneda functor (you have a Yoneda functor for E, but then you restrict the pre-sheaves to the subcategory) is fully faithful and has a left adjoint. So, everything except the left adjoint being left exact. Also, I guess I want i to be accessible, because that was also a condition. That comes out of this theory of accessible infinity categories: you just pick C to be big enough. So you might take something like this, where C is some full subcategory of what in this theory are called kappa-compact objects; in 1-category theory the usual term is kappa-presentable, but we changed all the terms for some reason. So with, if you like, a size condition. This is a standard idea. It works just the same way, except that you need to do several hundred pages of work just to make sense of it. But in the end you get where you wanted. So I have everything except L being left exact, and so of course I need the following proposition. Suppose E is a co-complete, finitely complete infinity category which has these two properties of universal co-limits and descent, C is a small, finitely complete infinity category, and I have a co-limit preserving functor L. Then L is left exact if and only if its composite with the Yoneda functor is left exact. I hope that's clear; I always write rho for the Yoneda functor for some reason. So that's what I need. The analogous statement works in the 1-categorical case, for topoi: this is also true if E is a 1-topos.
You can characterize left exactness this way, meaning: if I replace infinity groupoids with sets, L is co-limit preserving, and C has finite limits, then I can detect left exactness of such functors by restricting along the Yoneda functor. If that were the whole story, I would just stop here and say it's like the 1-categorical case, but the proof is actually a little different, because it actually gives you something a little bit stronger. The thing on this page does not work the same way as it does in the classical setting. So let's suppose I have a co-complete, finitely complete infinity category with universal co-limits and descent, and C is small, but maybe it's not finitely complete; I'll take that hypothesis away. Same conditions otherwise: L is co-limit preserving, and I want to know when L is left exact. It's left exact if and only if two things are true. One is that L preserves the terminal object, and the second is that L preserves pullbacks of the form like this: pullbacks of co-spans of representable functors. So that's a particular class of pullbacks in pre-sheaves. If L preserves these and it preserves the terminal object, then it's left exact. This does not work in the same way if E is a 1-topos. As we saw yesterday, there's a condition you have to impose, called, I believe, filtering, which is not this; it's a little bit more complex to state. But for infinity categories you get a very slick-looking condition. It's actually kind of miraculous that this works out this way. It's a difference between the infinity and one settings; really a difference between sets and infinity groupoids. That's what's different here: I'm not using sets, I'm using infinity groupoids. It doesn't work in a 1-topos. Oh, here's the illustration of why it doesn't work in a 1-topos, in case you're wondering. Let me take my category C to be a group G, and my topos will be sets. My co-limit preserving functor will be the co-limit functor.
So I can form a pullback in the pre-sheaf category (these pre-sheaves are just sets with a G-action), where I take the terminal object and the representable functor, G acting on itself. By the way, the hypotheses of the statement are true here. The co-limit of the terminal object is the terminal object, and the functor preserves pullbacks of representables, because a group has these pullbacks; so pullbacks of representables are representable, and therefore it has to preserve them. But here's a pullback diagram that's not a pullback of representables, because the terminal object isn't representable. The pullback is really the product, G times G with the diagonal action. If I form the co-limits over the G-action, then at three of the corners I just get a point, while this thing, G times G, has co-limit isomorphic as a set to G. So although this was a pullback, when I take the co-limits with respect to G, it is not a pullback. That's the illustration that this doesn't work in sets. One way people sometimes talk about it is to say that pullbacks are not a sound doctrine; but that's only in the 1-categorical setting, and in infinity categories pullbacks are a sound doctrine. All right, so I have a few more minutes. Let me sketch a proof; I may go through this quickly. So this is my setting: a co-complete, finitely complete infinity category with universal co-limits. I won't need descent for this first step. Here's a special case which is much easier to prove: C is small, I have a co-limit preserving L, and I'd like to know when it preserves pairwise products. And here is the condition: it preserves pairwise products if it preserves pairwise products of representables. I mention this special case because it works the same way in 1-categories; the proof is the same as for infinity categories. It uses the fact that co-limits distribute over products in a topos. When I say it works for 1-categories, by the way, I mean it works in a 1-topos: it's also true if E is a 1-topos. So you do have universality of co-limits in a 1-topos.
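The G-set counterexample above can be checked concretely. This is a minimal sketch (the choice G = Z/3 is mine): the co-limit of a G-set is its set of orbits, and applying orbits to the pullback square with corners G x G, G, G, point destroys the pullback property.

```python
# Colimits over a group G (= orbit sets) fail to preserve this pullback in G-sets.
n = 3
G = list(range(n))                      # Z/3, acting on itself by addition mod n

# Pullback of (G -> pt <- G) in G-sets is G x G with the diagonal action.
product = [(a, b) for a in G for b in G]

# Orbits of the diagonal action g.(a, b) = (a+g, b+g).
def orbit(a, b):
    return frozenset(((a + g) % n, (b + g) % n) for g in G)
orbits_of_product = {orbit(a, b) for (a, b) in product}

# Orbits of G acting on itself: a single orbit, i.e. a point.
orbits_of_G = {frozenset((a + g) % n for g in G) for a in G}

# After taking colimits, three corners of the square are points, but the
# fourth has |G| elements, while a pullback of (pt -> pt <- pt) is one point.
print(len(orbits_of_product), len(orbits_of_G))   # 3 1
```

The diagonal orbits of Z/3 on Z/3 x Z/3 are indexed by the difference b - a, so there are three of them, not one: the square of orbit sets is not a pullback.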
In particular, you have this property: if you take a product of two co-limits over two different indexing categories, that's equivalent to the co-limit of the products over the product of the indexing categories. And you can use that to prove that this implies this. It doesn't need descent, so it works for classical topoi. Now let's think about the general case of pullbacks. There's a special case I get immediately in my setting. This is the special case where I have a pullback, in the category of pre-sheaves, of a diagram of pre-sheaves where B is itself representable. Saying that L preserves this pullback: well, this pullback is really a product in the slice category, and L carries the slice over B to the slice over L(B). These both have universal co-limits and descent; they inherit them from pre-sheaves and from E. But of course the slice is also a pre-sheaf category: it's equivalent to pre-sheaves on the slice of C, because B is representable. And I just told you the recipe for knowing that something preserves products: it just has to preserve products of representables, and a product of representables in here is really a diagram like this, one of these pullback squares over a co-span of representables. So that's the lemma. The lemma gives us this special case, a pullback over a representable. I guess this part of the argument also works in a topos, I think. The problem is the general case. So here I have a general pullback in pre-sheaves. I'll write the bottom object B as a co-limit of representables. Then for each of those representables B_i, I can pull the whole pullback back over that representable. So I'll get, for each object in my indexing category... sorry, I'm getting things slightly mixed up here.
These are not pullbacks in I; they are pullbacks in the functor category from some indexing category to pre-sheaves. Sorry, I forgot there's another indexing here. So for each i I get a square like this, and that's just a pullback square in pre-sheaves. So I have this whole collection of pullbacks; I have this pullback of diagrams. I should have said what categories these were in: you get this pullback in pre-sheaves for each i. But when you pull back pullbacks, you get more pullbacks. So you get a bunch of cubes, all of whose sides are pullbacks. So for each morphism in the indexing category, you get pullbacks of pre-sheaves. However, we also know that B, by construction, was the co-limit of the representables. And we have this property that co-limits are universal in pre-sheaves. So if I take the co-limits of these other functors, the P_i, the X_i and the X'_i, those also recover P, X and X'. Now, in all these pullbacks, the bottom right object is a representable, and we know that L preserves pullbacks over a representable, because that was the special case. Well, actually not the left-hand square; but there's a rectangle where both it and the right-hand square are pullbacks over one of these representables, and you can patch pullbacks, so the left-hand pullback will also be preserved by L. All right, so I'll apply L to all of these; I've shown that here for this picture. Applying L, I get some more pullbacks. But L preserves co-limits, so if I take the co-limit of the P_i's, the X_i's and so forth, I get L(P), L(X), L(B). And I have this diagram on the top. If you think about these as natural transformations, they are actually Cartesian natural transformations, these things here: they represent Cartesian natural transformations of functors from I into pre-sheaves, each edge here. Then I have this property called descent.
Descent says that this has co-limits. Actually, these are now in E, not in pre-sheaves. So this has co-limits, and the inclusion functor, if you like, preserves co-limits. So these are co-limits formed in the Cartesian category, and that means that these squares are also pullbacks. I form the co-limit: L(P) is the co-limit of the L(P_i)'s. So if I pull back again, I get pullbacks; that's descent. So what I care about is this map, L applied to the original pullback. I'd like to know that it's an equivalence. What I've shown is that the pullback along each of the L(B_i)'s is an equivalence: I didn't write the squares here, but the pullback of the original square along this square is the square from L(P_i) and L(X_i) to L(B_i) and L(X'_i). And then universality of co-limits tells you that the map F is an equivalence, because L(B) is the co-limit of the L(B_i)'s: if you pull back over all the pieces of the co-limit and get equivalences, then you had to have an equivalence. I hope that's clear enough. This was the key step; I used descent here, at this step. Otherwise I could have done this in a 1-topos, but this is the key step. So that's the proof of that property, and that's the story of the characterization of infinity topoi. What I want to do next time, which I guess is tomorrow, is to develop some consequences of that. The most interesting one, which is the main one for the first hour next time, is the object classifier. You may remember that a topos has a sub-object classifier; an infinity topos has, in some sense, an object classifier, so you have something that's a little bit better than a sub-object classifier. I'll say what that is next time. And I'll also talk about some other aspects, truncation and connectivity. That's the first hour, and then in the second hour we'll do some more things. We'll leave that till tomorrow. Okay. Thank you.
Thank you very much for your very nice lecture.
In this series of lectures I will give an introduction to the concept of "infinity topoi", which is an analog of the notion of a "Grothendieck topos" which is not an ordinary category, but rather is an "infinity category". No prior knowledge of higher category theory will be assumed.
10.5446/55522 (DOI)
Okay, so I'm very pleased to introduce Professor Rezk again. Thank you. So I will continue today to talk about some properties of infinity topoi. Last time I defined the notion of an infinity topos: it's an infinity category that's a left exact localization of a pre-sheaf category, pre-sheaves of infinity groupoids on a small infinity category. And then I spent some time describing an equivalent characterization: it's a presentable infinity category with universal co-limits and descent. Both descriptions will be used. Okay. So I want to return just briefly to topology again, to motivate one more idea. In topology you have the notion of a fiber bundle. It's a map p whose fibers over any point are homeomorphic to F, or more precisely, which is locally equivalent to a product of F with the base. That's a fiber bundle. One of the great theorems of topology is that you have universal bundles: there is a universal bundle with fiber F, usually under some hypotheses, which I will not even attempt to state. One way to describe it: it's the bundle associated to a space called BG, the classifying space of a topological group G, namely the topological group of homeomorphisms of the space F. Since it's a group, you can form a principal bundle, the universal principal bundle of G; but we want the universal associated bundle with fiber F, which is constructed as a Borel construction. All right. And then you get the correspondence on bundles: fiber bundles with fiber F correspond to maps from the base space into this classifying space BG. This is, say, up to equivalence (up to isomorphism of spaces over the base), and on the other side up to homotopy. And of course there are hypotheses; B usually has to have some niceness property. It's not fully general, but it's a great theorem, because it connects something that's not a priori about homotopy theory.
It's a statement about topological spaces of a particular form, up to isomorphism, and it says they're actually classified by homotopy. However, if you're a homotopy theorist, you kind of want everything to be homotopy theory. So you can think about modifying the problem. Instead of looking at fiber bundles, I'll just look at arbitrary maps, and instead of trying to arrange for the fibers to be homeomorphic to F, what I'll arrange for is for the homotopy fibers to be weakly equivalent to F. So I can form a homotopy fiber, which is the homotopy pullback of the map along any point in the base, and I can try to classify these. So I could ask for a universal bundle, which will have a similar form, but where G will be something I'll call hAut(F). This exists, and it has the property that arbitrary maps E to B with homotopy fiber weakly equivalent to F correspond to maps from the base into BG: here up to equivalence over the base, meaning the equivalence relation generated by weak equivalences over the base, and on the other side up to homotopy. So these exist. This is not just an exercise in pure thought, by the way. Examples of this were important early on in areas like surgery theory, where sphere bundles up to fiber homotopy equivalence, as it was called, were something you had to understand. This was actually something developed very early on in the subject. So what is this hAut(F)? Well, I take the space of all maps from F to F, and then I notice that there's a subspace consisting of the homotopy equivalences; it's actually a union of components of that space. This is not a topological group, but it is a topological monoid. Although it isn't a group, it's what's usually called group-like as a monoid, because although it doesn't have inverses, it has inverses up to homotopy, by definition.
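Schematically, the two classification statements being compared are the following (under the usual niceness hypotheses on the base; the notation hAut(F) for the group-like monoid of self-homotopy-equivalences is the editor's rendering of what the speaker calls it):

```latex
\big\{\text{fiber bundles } E \to B \text{ with fiber } F\big\}\big/\!\cong
\;\;\longleftrightarrow\;\;
\big[\,B,\ B\,\mathrm{Homeo}(F)\,\big],
\\[4pt]
\big\{\text{maps } E \to B \text{ with homotopy fiber} \simeq F\big\}\big/\!\sim
\;\;\longleftrightarrow\;\;
\big[\,B,\ B\,\mathrm{hAut}(F)\,\big],
\qquad
\mathrm{hAut}(F)\subseteq \mathrm{Map}(F,F).
```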
For any point in that monoid, there's another point such that the product is in the same path component as the identity. So this was worked out in the subject by various people; there's a good formulation from 1975. Here's an example, just to orient you to how this looks in practice. Let's take F to be an Eilenberg-MacLane space K(G, n), where G is an abelian group and n is at least two. This is a space whose n-th homotopy group is isomorphic to the group G and whose other homotopy groups are trivial. Then you can actually compute what the topological monoid of homotopy automorphisms looks like up to homotopy. It has a subgroup, if you like, that's actually the automorphisms of the abelian group, which acts; but there's also a part that's the Eilenberg-MacLane space itself. It's actually equivalent to a topological group, an extension of two pieces: the discrete automorphism group of G and the Eilenberg-MacLane space itself, which can, up to homotopy, be given the structure of an abelian group. And then you learn that if you want to classify maps whose homotopy fiber is an Eilenberg-MacLane space, that is, whose homotopy fiber only has a homotopy group in a given dimension, then that's equivalent to maps into this classifying space. And that's useful, because since we have a description of this monoid, we can describe this classifying space. For instance, this classifying space only has two nontrivial homotopy groups: in pi_1 it's the automorphism group of G, and in pi_{n+1} it's G. That's if n is at least two. And you can do similar things when n is 1 and G is just a group. In this case the answer is more complicated. This is basically what we might call the 2-groupoid of groups that look like G. You can describe its homotopy groups: pi_1 is actually the outer automorphism group of G, and pi_2 is the center of G.
And the other homotopy groups are trivial. This gadget is related to the problem of classifying extensions of groups with kernel G. Okay, so that just gives you an example of how this might work in practice. It's very tempting at this point, since you can do this, to just put everything together. Specifying the fiber seems like making things a little bit difficult; let's take not maps with a given homotopy fiber, but arbitrary maps, up to the equivalence relation given by commutative diagrams of this type. Well, those are going to correspond to maps, up to homotopy, into the disjoint union of all these classifying spaces of the homotopy automorphism monoids, where I take the co-product over the collection of weak equivalence classes of spaces. Call this thing, I don't know, Omega. It's kind of a universal map: over this guy there'll be some universal example of a map, and I'll call its domain Omega*. Of course, this is not really in the category anymore (it's large), so you have to make sense of what that means. This is kind of a characteristic property, I think of it: characteristic of infinity groupoids. It's not generally shared by infinity categories. You're not going to find in most infinity categories a universal map of this type, but it does work in infinity groupoids, and, well, of course it's going to work: you can do this in an infinity topos. I should say here as an aside: up to this point I was working with an explicit model of infinity groupoids, like topological spaces; here I've switched to thinking about an infinity category, so I'm using a different language. But if I take this Omega, which is some kind of large infinity groupoid, there's another large infinity groupoid I can think about. It's this: the infinity category of infinity groupoids.
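The group-theoretic invariants just named, pi_1 = Out(G) and pi_2 = Z(G) of the classifying space for BG-bundles, are finite computations for a small group. Here is a minimal sketch (the choice of G = S3, the symmetric group on three letters, is the editor's): brute-force the center and the automorphism group, and read off the order of the outer automorphism group as |Aut(G)| / |G/Z(G)|.

```python
from itertools import permutations

# Model S3 as permutations of {0,1,2}; the group law is composition.
S3 = list(permutations(range(3)))
def mul(a, b):  # (a*b)(i) = a(b(i))
    return tuple(a[i] for i in b)

# Center Z(G): elements commuting with everything (this is pi_2 of B Aut(BG)).
center = [g for g in S3 if all(mul(g, h) == mul(h, g) for h in S3)]

# Automorphisms: bijections G -> G preserving multiplication.
autos = []
for images in permutations(S3):
    phi = dict(zip(S3, images))
    if all(phi[mul(a, b)] == mul(phi[a], phi[b]) for a in S3 for b in S3):
        autos.append(phi)

inner_count = len(S3) // len(center)        # |Inn(G)| = |G / Z(G)|
outer_count = len(autos) // inner_count     # |Out(G)| (this is pi_1)
print(len(center), len(autos), outer_count)
```

For S3 the center is trivial and every automorphism is inner, so both homotopy groups of the corresponding classifying space vanish; replacing S3 with an abelian group such as Z/4 gives a nontrivial pi_2.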
This symbol means I'm taking the maximal infinity groupoid inside the infinity category of infinity groupoids. Well, that's actually what Omega is. I actually said something like this in the first lecture: if you go back, I said functors into S are equivalent to the slice over S, which is a very strange thing to say, but it's true. And I said the correspondence works for infinity groupoids but need not work in an arbitrary infinity category. But you can build these kinds of things. Okay, so let me return to a more formal setting, where I'm talking about infinity categories. Recall, if I have an infinity category E, I have the arrow category E with an arrow, and inside there I had this subcategory, not full, the Cartesian subcategory, which has all the arrows as objects, but whose morphisms are just the pullback squares. And what I'm asking for (the dream) is a terminal object of this category. That's what I'm asking for: a universal morphism. You cannot generally have this, so I'll ask for something close: a sub-terminal object of that infinity category. And this is what I'll call a universal family. So it's sub-terminal, or, in equivalent terminology, (-1)-truncated. What this means is that if I take the mapping space in this category from an arbitrary object into U, it's going to be equivalent either to the terminal infinity groupoid or to the empty infinity groupoid. If U were actually terminal, it would always be equivalent to the terminal infinity groupoid. So the collection of universal families, I mean, they form a partial order inside Cart because of this property; the collection is equivalent to a partial order. And of course that definition makes perfect sense in a 1-category.
And in a 1-topos, a Grothendieck topos for instance, you have a sub-object classifier, which I'll write as a morphism from the terminal object to Omega^mono. And of course that has the property that monomorphisms E to B, up to isomorphism, correspond to maps from the base to this classifier. That's the sub-object classifier, and it is an example of a universal family. In Set it's the largest universal family. Every universal family in Set is a monomorphism, so there are in fact not very many, and they're all sub-objects of the sub-object classifier. When I was preparing this, I was actually going to say that that's true in any 1-topos, but I couldn't come up with a proof, so maybe that's not true; I don't know. Somebody can figure that out for me. Okay. So I've defined this notion of a universal family: a particular kind of morphism that is a sub-terminal object of this Cartesian category. Equivalently, I could formulate that by saying that the forgetful functor from the slice over U back to the Cartesian category is fully faithful. If it were an equivalence, that would say U was a terminal object; so this says it's sub-terminal. So I can talk about the essential image of this functor, which will be a full subcategory of Cart of the arrow category, which I'll call L_U. And that's an example of what I'm going to call a local class. A local class is a full subcategory of Cart E-arrow with two properties. The first is that it's closed under base change; since the morphisms in Cart E-arrow are all pullbacks anyway, I can just say that if there's a morphism from F to F' in there and F' is in my local class, then so is F. And the second property is that L has co-limits, and the obvious functor from L back to the arrow category preserves co-limits. That's the definition of a local class.
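The two universal-property statements in play here can be written out as follows (an editorial paraphrase of the definitions just given): sub-terminality of a universal family u, and the classifying property of the sub-object classifier in a 1-topos.

```latex
% u is a universal family: u is (-1)-truncated in Cart(E^{\Delta^1}), i.e.
\mathrm{Map}_{\mathrm{Cart}(\mathcal{E}^{\Delta^1})}(f,\,u)\;\simeq\;\emptyset
\ \text{ or }\ \ast
\qquad\text{for every morphism } f \text{ of } \mathcal{E}.

% Sub-object classifier of a 1-topos: monomorphisms into B, up to
% isomorphism, correspond to maps into \Omega^{\mathrm{mono}}:
\mathrm{Sub}(B)\;\cong\;\mathrm{Hom}_{\mathcal{E}}\big(B,\ \Omega^{\mathrm{mono}}\big).
```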
If you remember from last time, the notion of descent I talked about is actually equivalent to saying that this thing itself, Cart of the arrow category, is a local class. So if I have a universal family, I get this corresponding local class L_U: the collection of all morphisms in E that can be obtained as a pullback of U. That's actually what's called a bounded local class. So what does that mean? Suppose I have a local class L, a full subcategory of Cart E-arrow, and there's a functor from Cart E-arrow back to E, the target functor: objects are morphisms, and you send each to its target. Let's pick an object B of E and form pullbacks of infinity categories; these are pullbacks in infinity categories. If I look at the arrows whose target is B, that pullback is going to be the slice E over B. It's equivalent to the slice, except that I'm actually looking at Cartesian squares lying over the identity map of B, and because they're pullback squares, the top maps are also equivalences. So this is actually the maximal sub-groupoid of the slice. That's an infinity groupoid, potentially a large one. But now I have this full subcategory, and so I can restrict to the sub-groupoid, which I'll denote with a subscript L: the infinity groupoid spanned by the objects of the slice which are in L, that is, the morphisms with target B that lie in the local class. And to say that the local class is bounded is just to say that each of these infinity groupoids of things in the local class is essentially small, equivalent to a small infinity groupoid. That's what it means to be bounded. Now, in the case where I actually have a class that comes from a universal family, you can actually compute what this thing looks like.
This infinity groupoid is none other than the space of maps from B into the codomain of the universal family. And mapping spaces are always essentially small when we have a locally small infinity category, which all of these are. So boundedness is a kind of size restriction on the local class. Okay, with this notion of a bounded local class, I can now assert a correspondence — this is for an infinity topos: universal families correspond exactly to bounded local classes. The idea is that to each B I can assign this essentially small infinity groupoid determined by the local class. The direction from left to right I've already described: a universal family gives you a bounded local class. So I need to tell you how to go the other way. If I start with my bounded local class L, I get a functor from E^op to essentially small infinity groupoids — let's just pretend those are the same as small. Furthermore, because you have universal colimits and descent in your infinity topos, this functor preserves limits. So we have a limit-preserving functor from E^op to S; it looks like a representable functor. And since E is presentable, general nonsense about presentable infinity categories tells you it is representable — by an object U that will be the codomain of the universal family. Okay. Finally, it turns out that in fact every morphism of an infinity topos is contained in some bounded local class. Again, this requires a proof, which ultimately depends on the presentability hypothesis, so that I can talk about sizes.
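The shape of the correspondence can be summarized as follows — a sketch in LaTeX, with the notation L_U(B) for the groupoid of maps with target B introduced here for exposition.

```latex
% From a universal family p : U^{*} \to U, the groupoid of morphisms
% with target B lying in L_U is a mapping space:
\[
  L_U(B) \;\simeq\; \operatorname{Map}_E(B, U),
\]
% hence essentially small.  Conversely, a bounded local class L gives
% a functor
\[
  B \;\longmapsto\; L(B) \;:\; E^{\mathrm{op}} \longrightarrow \mathcal{S},
\]
% which preserves limits (by universal colimits and descent) and is
% therefore representable, since E is presentable; the representing
% object is the codomain U of the desired universal family.
```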
The idea is that the Cartesian arrow category is a union of bounded local classes L_kappa, where L_kappa is the bounded local class of relatively kappa-compact morphisms, for kappa some regular cardinal. You put on a size restriction and get a bounded local class; as the sizes grow, you get everything. As a consequence, you get an exhaustive collection of universal families: every morphism is in the bounded local class associated to some universal family, i.e. every morphism in E is a pullback of some universal family. It's kind of nice to think of the union of those universal families as giving you a single map, living in a higher universe, which is the object (or morphism) classifier — the universal map, if you like. This also gives you yet another characterization of an infinity topos: E is an infinity topos if and only if it is a presentable infinity category, it has universal colimits, and it has enough universal families — that is, every morphism is a pullback of some universal family. Okay. At this point there's a slide I didn't make, and I feel I should talk about it; it's sort of important, so let me say something briefly. This characterization of a universal family is a universal property, right — it's a subterminal object. There's a more intrinsic characterization of a universal family, and that's the notion of a univalent map. If I have a morphism p: E → B in an infinity topos, I can form something I'll call Iso_p, mapping to B × B — you can actually do this in a one-topos as well. So what is this guy? To tell you what it is, I'll tell you what a map into Iso_p is, lying over the projection, which I'll call pi. Say f and g are two maps from T into B; those correspond exactly to giving a diagram of the following type. I can pull p back along either f or g.
I'll call them f*E and g*E — the pullbacks of p along f and g. And I can look at the collection of all maps f*E → g*E over T that are isomorphisms — it's an isomorphism classifier for this map, if you like. So this thing exists; in fact you can make the same construction in a one-topos. There's one more map I should have mentioned — call it i: there's a tautological isomorphism of the two pullbacks along the identity map of B, namely the identity, so i classifies the identity as an isomorphism. We say that p is univalent if and only if the map i from the base B into Iso_p is itself an isomorphism. And then the theorem you can prove is that p is a universal family if and only if it is univalent. This is the univalence that appears in univalent type theory — this is how they recognize their version of universal objects; I'll say a little more about that at the end. I did want to make that definition. I'm running a little behind, but I think it's okay. All right, let me pause for a second and get my wits back. Okay, so let me describe yet another closely related characterization: the notion of a van Kampen colimit. Say I have an infinity category E which has pullbacks. Then, as we've noted, a morphism in E gives an associated pullback functor. Now look at a colimit cone inside E: J is an infinity category, probably small, and I take the right cone J^▷ — formally join a terminal object — and consider a colimit cone on it.
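The univalence criterion can be recorded in symbols (a sketch; the names Iso_p and i follow the lecture):

```latex
% For p : E \to B, the object \mathrm{Iso}_p \to B \times B has the
% universal property that a map (f, g) : T \to B \times B lifts to
% \mathrm{Iso}_p exactly by giving an equivalence over T:
\[
  f^{*}E \;\xrightarrow{\;\sim\;}\; g^{*}E .
\]
% The tautological identification of the two pullbacks along the
% diagonal gives i : B \to \mathrm{Iso}_p, and then:
\[
  p \text{ is univalent}
  \;\iff\;
  i : B \xrightarrow{\;\sim\;} \mathrm{Iso}_p
  \;\iff\;
  p \text{ is a universal family.}
\]
```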
Such a cone is a van Kampen colimit if the induced functor from J^▷op to large infinity categories (the hat means large), sending an object j to the slice over X_j, is a limit cone in infinity categories. So slicing takes this colimit to a limit. That's a van Kampen colimit, and the definition makes sense even in a one-category. For instance, in a one-topos, coproducts are van Kampen: if X is an object of E that's the coproduct of objects X_i, then the slice over X is equivalent to the product of the slices over the X_i. In general, pushouts and other colimits are not van Kampen in a one-topos. Sometimes they can be — as we saw earlier, pushouts along monomorphisms are van Kampen — but in general that's not the case. However, guess what: in an infinity topos, all pushouts are van Kampen. In fact, we have the following theorem: E is an infinity topos if and only if it is presentable and all small colimits are van Kampen. Let me briefly sketch the idea. Say I have an object X in E that's a colimit of a functor from a small infinity category I. I can form the slice over X, or I can form the slices over the X_i and take the limit of those slices in infinity categories. Well, I've got functors in both directions. There's a functor from left to right built from pulling back along the tautological maps X_i → X. By the way, this is one of the points where, if you want to make this rigorous in infinity categories, you have to do a lot of work; the way I'm talking about it is rather imprecise — such a pullback functor exists, but its construction requires some thought. Nonetheless you can define it, and there's a functor going the other way, which takes colimits.
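In symbols, the van Kampen condition and its coproduct special case read (a sketch, using the slice notation E_{/X}):

```latex
% A colimit cone X \simeq \operatorname*{colim}_{j \in J} X_j is
% van Kampen when slicing carries it to a limit of infinity categories:
\[
  E_{/X} \;\simeq\; \lim_{j \in J^{\mathrm{op}}} E_{/X_j} .
\]
% In a 1-topos, coproducts are van Kampen:
\[
  E_{/\coprod_i X_i} \;\simeq\; \prod_i E_{/X_i} .
\]
```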
To go the other way: if you have a collection of maps, one over each X_i, related functorially in this way, the fact that it represents an object of the limit means it's a Cartesian natural transformation, so you can form the colimit. And the fact that the two composites give you the identity amounts exactly to universality of colimits in one direction and to descent in the other — modulo actually carrying out that proof. That's the idea. I can answer that question now. Yeah — in a one-topos, all colimits, including coproducts, are universal, but there's another property, that coproducts are disjoint; and those two together — universal colimits and disjoint coproducts — give you this property in a one-topos. All right. You can say a little more, even. I have this functor from E^op to large infinity categories taking an object A to the slice. I could instead introduce a bounded local class and use, not the whole slice, but the full subcategory of the slice spanned by arrows over A which are in my bounded local class. The point is that this is actually a small infinity category — we saw already that its maximal subgroupoid is a small infinity groupoid. This functor preserves limits, using descent and universality of colimits; and because E is a presentable infinity category, you can show that it's representable by an internal infinity-category object, call it U•_L, of the infinity topos. Let me not tell you what an internal infinity-category object is — there is such a definition. And this even leads to another characterization, due to Rasekh: E is an infinity topos if and only if it is presentable and every morphism is contained in a local class that is represented by an internal infinity-category object. All right. So that's a pretty story. Around this point, or maybe earlier, people start wondering: well, we're talking about infinity topoi, which are analogs of Grothendieck topoi.
What about elementary infinity topoi? And my answer has always been that I don't know what "elementary" means — other people know what elementary means; I don't. It often has a connotation of using finite constructions; people talk about first-order logic and things like that. That's kind of hard to do with infinity topoi, because, notice, "infinity" is in the name: in infinity categories it's very hard to get away with anything of a truly finite nature in the sense of, say, first-order logic. I don't know how you would do that. Nonetheless, people have proposed definitions of an elementary infinity topos. Here is Rasekh's definition: E is an elementary infinity topos if it has finite limits and colimits, it has a subobject classifier, and every morphism is contained in a local class which is represented by an internal infinity-category object. That's one possible definition. You want the subobject-classifier axiom, by the way, because it's not implied by the last axiom: every monomorphism will be contained in some local class represented by an internal infinity-category object, but there may not be a single one that represents all the monomorphisms — so you add that as an extra property. Rasekh shows that this is good enough to prove, for instance, that your infinity category is locally Cartesian closed, so you do recover things you would like to have. I haven't talked much about Cartesian closure of infinity topoi; they are Cartesian closed and even locally Cartesian closed. All right. Okay. So I'm going to shift gears now. I want to talk about some particular examples of local classes that are important. Let's start by reminding ourselves what monomorphisms are: in an infinity category with pullbacks, f is said to be a monomorphism if its diagonal is an isomorphism.
Now, in any infinity category we can talk about a pair of morphisms being orthogonal, just as in a one-category. We say that f and g are orthogonal if a unique lift exists in any commutative square whose left and right sides are f and g: the top and bottom can be anything, and there's a unique dotted arrow making both triangles commute. Another way to formulate this: you can form a commutative square of mapping spaces — infinity groupoids — using composition with g and composition with f, and this square should be a pullback; that's an equivalent condition. "Unique" in an infinity category means unique up to contractible choice: there's an infinity groupoid worth of choices of lift, and we want it to be contractible, that is, equivalent to the terminal infinity groupoid — which is equivalent to the diagram on the right being a pullback. So we can define a cover in an infinity category to be a morphism that is left orthogonal to every monomorphism. There are other terms used here; the most common is "effective epimorphism" rather than "cover", and some people even say "surjection". However, I need to emphasize something that makes this an awkward term: covers are usually not epimorphisms. In the first hour, I pointed out that epimorphisms in infinity groupoids are rare and a little strange; covers are not generally epimorphisms, so "effective epimorphism" is for that reason an inconvenient term. For example, in infinity groupoids — the homotopy theory of spaces — a map turns out to be a cover if and only if the induced map on sets of path components is surjective. One way to say that: you can lift a point of the target up to homotopy. That's what a cover is in spaces.
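The orthogonality condition can be sketched in LaTeX as the pullback of mapping spaces just mentioned:

```latex
% f \perp g: for every square with left side f and right side g, the
% infinity groupoid of lifts is contractible.  Equivalently, the square
\[
\begin{array}{ccc}
  \operatorname{Map}(B, X) & \longrightarrow & \operatorname{Map}(B, Y) \\
  \downarrow & & \downarrow \\
  \operatorname{Map}(A, X) & \longrightarrow & \operatorname{Map}(A, Y)
\end{array}
\]
% (horizontal maps: postcompose with g : X \to Y; vertical maps:
% precompose with f : A \to B) is a pullback of infinity groupoids.
% A cover is a map f with f \perp g for every monomorphism g.
```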
The class of monomorphisms is stable under pullback, because it's defined using limits. A map f is a cover if and only if a unique lift exists in every such square — and here I only need squares whose bottom is the identity of B, because I can pull everything back to that case. The point about a monomorphism g is that we're really asking about maps in the slice over B, and the space of maps from f to g in the slice is automatically either empty or contractible, because g is a monomorphism. So the uniqueness condition comes for free, and ultimately the statement takes this form: f is a cover if and only if, whenever f factors through a monomorphism g into B — that is, whenever the space of maps from f to g in the slice is nonempty — the monomorphism g is an isomorphism. So there's a very elementary description of what a cover is. And f is a cover in E if and only if it is a cover in the slice, because of this pullback property. I want to note one other condition that you have.
If a composite g∘f is a cover, that implies that g is a cover: if I have a commutative triangle like this and the composite is a cover, that automatically implies the second arrow is also a cover. This is kind of interesting — it's an asymmetry with monomorphisms. In the first hour I told you that in the infinity world, monomorphisms do not have the property that if a map factors through a monomorphism then the first map is a monomorphism; but covers do have the complementary property. Okay. There's a consequence: if I have any colimit — actually this holds in any infinity category — then the induced map to the colimit from the coproduct of the values at the objects of I is a cover. I've written a proof here, which I'm going to pass over. In a presentable infinity category, the classes of covers and monomorphisms form a factorization system: they are mutually orthogonal in the sense I described, both classes are stable under retracts, and every morphism can be factored as a cover followed by a monomorphism. So for any morphism from A to B you get a factorization — which I'll write as a cover followed by a monomorphism — and I'll call the intermediate object the image. This factorization is essentially unique, meaning unique up to contractible choice; I might just say unique, since that's the only notion of uniqueness you have in infinity categories. This is the replacement for epi–mono factorization — but remember, covers are not epis. Now, in an infinity topos you can actually construct the cover–mono factorization directly using the Čech nerve. I'll sketch this briefly. Given a morphism f, I can form its Čech nerve. This is what's called an augmented simplicial object: a functor from the right cone of Δ^op, where Δ is the category that indexes simplicial objects. And it looks like this.
The augmentation is the map f, and you complete the diagram by putting in the iterated fiber products of A with itself over B. This is really a kind of right Kan extension along the inclusion of the subcategory that contains just the map f. So what I can do is take the colimit of the Čech nerve restricted to Δ^op — the simplicial indexing subcategory — and that gives me a factorization: the colimit maps to the cone point B, and it receives a map from A, because A is one of the objects in the diagram I'm taking the colimit of. This turns out to be the cover–mono factorization, so that colimit is the image of my map. This is precisely analogous to something you can do in a one-topos — in fact the same thing is true in a one-topos, only there you usually cut off the diagram after the first fiber product, because the part of the diagram to the left of that point is irrelevant when you compute this colimit in a one-topos. I've sketched a proof out here; let me just tell you some bits of the argument. To show that this works: without loss of generality, I can assume f is a map from A to the terminal object, because I can work in the slice over B. So here's what the Čech nerve looks like. If I take the product with A everywhere — with the constant diagram A — then I have some extra structure: additional maps going in the reverse direction, which is classically called a contracting homotopy for the augmented simplicial object. So really I'm extending the functor along an inclusion into a larger category which indexes that structure. It turns out that diagrams like that give an example of an absolute colimit in infinity categories: any diagram that factors through this extension is a colimit cone in any infinity category — no other conditions needed. This is precisely analogous to a classical fact.
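The factorization just described can be sketched in LaTeX (Č(f) denotes the Čech nerve):

```latex
% The Cech nerve of f : A \to B is the augmented simplicial object
\[
  \cdots \;\rightrightarrows\; A \times_B A \times_B A
  \;\rightrightarrows\; A \times_B A
  \;\rightrightarrows\; A \;\longrightarrow\; B ,
\]
% (face maps suppressed), and the image is the colimit over
% \Delta^{\mathrm{op}} of the unaugmented part:
\[
  \operatorname{im}(f) \;=\;
  \operatorname*{colim}_{\Delta^{\mathrm{op}}} \check{C}(f)_{\bullet},
  \qquad
  A \xrightarrow{\ \text{cover}\ } \operatorname{im}(f)
    \xrightarrow{\ \text{mono}\ } B .
\]
```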
In fact, it probably implies the more classical statement in one-categories about split coequalizers, which many of you are probably aware of: split coequalizers are an absolute colimit in a one-category. Split coequalizers are not an absolute colimit in infinity categories; you must instead use this simplicial object with a contracting homotopy if you want a similar result. Anyway, you can use this to derive the result. So let E denote this colimit. If I take A × E, I can push the product inside the colimit, by universality of colimits: A × E is the colimit of the terms A times the iterated products, and I just showed you that that colimit is A. So the projection map A × E → A is an isomorphism. Then I do the same thing with E × E: put it inside the colimit; each term of the colimit involves at least one factor of A, and A × E is A, so I can use that to show I get E. So the projection map E × E → E is an isomorphism, and therefore the diagonal map is an isomorphism — so E maps monomorphically to the terminal object. And then, because E is a colimit, by something I said earlier the map from the coproduct of all the values of the diagram to E is a cover. Actually it's a little better: all those maps factor through the given map — they factor through A — and therefore we can conclude that the map p: A → E is a cover. That's the proof. It's fairly straightforward, really something you could have done in a one-category; I only used universality of colimits, not descent, in this argument. All right. Covers give you another characterization of local classes. A class in the Cartesian arrow category is a local class if and only if it has three properties. The first: it has to be closed under coproducts.
Second, for any pullback diagram of that shape, if g is in the class then so is its pullback f. And third, if the bottom map p is a cover and the pullback f = p*g is in L, then g is in L. This third condition says that a map which is locally in the local class is in the local class — hence the name. All right. First a consequence. Suppose I have an arbitrary morphism f in my infinity topos. I can define a class of maps L_f: the class of all maps g: E″ → B″ such that there exists a diagram of the following form — some pullback of g along a cover which is also a pullback of f. So g locally looks like f. An immediate consequence of the characterization I just gave is that L_f is a local class; in fact, it's the smallest local class containing f. Every local class is contained in a bounded local class, and this is the smallest, so it's also bounded. Therefore there exists a universal family which classifies maps that locally look like f. Okay. Now the characterization of local classes — let me run through this quickly. In one direction, I want to show that local classes have these three properties, and two of them are immediate: local classes are closed under colimits, in particular under coproducts, and base change is part of the definition. The key part is to show that a local class has the pullback-along-a-cover property. The idea: a cover is the colimit of its own Čech nerve. So if I have a g which I want to show is in a local class, knowing its pullback along a cover is in the class, I just pull back the whole thing along the Čech nerve.
So I get a diagram like this — actually another Čech nerve — and the stuff running along the top is also a colimit; that's universality of colimits. F is in the local class, therefore so are all its pullbacks, since that's part of the definition; so all the f_k are in L. But then I get g by forming the colimit, along this Cartesian natural transformation, of objects of L — so g is in the local class as well. That's how you prove the locality property. And here's the proof in the other direction: I need to show that a class with the three properties is closed under base change and under colimits in the Cartesian arrow category. Base change is automatic, and we already have closure under coproducts. Suppose I have a general Cartesian natural transformation from some small category with the property that each component of the natural transformation is in the local class — really, a functor into L viewed as a full subcategory of the Cartesian arrow category. I can form the colimit; that's a morphism g in the Cartesian arrow category. Because of the descent property, for every i, each of these diagrams is a pullback — that's what descent gives me: I form the colimit of a Cartesian natural transformation, then pull back, and I get back where I started. Therefore, if I take the coproduct of all these maps indexed by i, that's also a pullback — again using descent. But p is a cover. By hypothesis the φ_i are all in L, and the class is closed under coproducts — that's the first property. Then property three gives me that g is in the class. So the class is closed under colimits in the Cartesian arrow category. Anyway, the point of giving these arguments is to show that all I'm ever really using is universality of colimits and descent.
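The characterization just proved can be recorded compactly (a sketch; L_f is the smallest-local-class notation from above):

```latex
% L \subseteq \mathrm{Cart}(E^{\to}) is a local class iff:
% (1) f_i \in L for all i  \Rightarrow  \coprod_i f_i \in L
%     (closed under coproducts);
% (2) g \in L  \Rightarrow  p^{*}g \in L for every pullback
%     (closed under base change);
% (3) p a cover and p^{*}g \in L  \Rightarrow  g \in L  (locality).
%
% The smallest (and automatically bounded) local class containing f:
\[
  L_f \;=\; \bigl\{\, g \;\bigm|\; \text{there is a cover } p
  \text{ with } p^{*}g \text{ a pullback of } f \,\bigr\}.
\]
```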
Those two properties really let you prove all of these things. I'm coming close to an hour, so even though I have more in this lecture, I might pause very soon. These ideas also show that both monomorphisms and covers are themselves local classes — I've sketched the proofs here. All right, let me see how much more I have. Let me take about three more minutes and talk about truncation. As I've said, the notion of monomorphism is the first of a sequence of conditions called n-truncatedness. Remember: in an infinity category, a map is n-truncated if the iterated diagonal — you iterate the diagonal construction on the map n + 2 times — is an isomorphism. An object is n-truncated if its map to the terminal object is n-truncated. I'll write E_{≤n} for the full subcategory of n-truncated objects. Let me note something here: if I have an n-truncated object, then any mapping space into it is an n-truncated infinity groupoid, that is, an n-groupoid. So this full subcategory of n-truncated objects is an example of what's called an (n+1)-category — a better term is probably (n+1, 1)-category; Lurie's term is n-category — that is, a category whose mapping spaces are n-groupoids. So I get from this a chain of classes of morphisms in my infinity category: the isomorphisms, then the monomorphisms, which are the (−1)-truncated maps, and so on. And by orthogonality I get the complementary classes of n-connected maps: a map is n-connected if it's left orthogonal to the (n−1)-truncated maps. There's other terminology in use here: sometimes this is called n-connected, and sometimes (n−1)-connected. So, that's great.
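The inductive form of the definition may be easier to remember than the iterated diagonal, and in spaces both conditions unwind to homotopy-group statements (a sketch; note the off-by-one against the classical "connected" terminology):

```latex
% For f : A \to B with diagonal \Delta f : A \to A \times_B A:
\[
  f \text{ is } (-2)\text{-truncated} \iff f \text{ is an equivalence},
  \qquad
  f \text{ is } n\text{-truncated} \iff
  \Delta f \text{ is } (n-1)\text{-truncated}.
\]
% Unwinding: the (n+2)-fold iterated diagonal is an equivalence;
% the (-1)-truncated maps are exactly the monomorphisms.
%
% In infinity groupoids (spaces):
\[
  X \text{ is } n\text{-truncated} \iff \pi_k(X, x) = 0
  \ \text{for all } x \in X \text{ and } k > n;
\]
\[
  X \text{ is } n\text{-connective} \iff X \neq \emptyset \text{ and }
  \pi_k(X, x) = 0 \ \text{for all } x \in X \text{ and } k < n,
\]
% i.e. n-connective = classically (n-1)-connected.
```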
I'm going to follow Jacob Lurie and call it n-connective, so I don't have to deal with this confusion. An object is n-connective if its map to the terminal object is n-connective. In a presentable infinity category this is a factorization system: in particular you get factorizations, essentially canonical, of any map as an (n+1)-connective map followed by an n-truncated map. I'll call the intermediate object the n-image and the construction the relative n-truncation; if I apply it to the map to the terminal object, it's the absolute n-truncation, which I'll write tr_n(X) — really just a special case of the first. As with cover and mono, these are local classes. I'll note one more thing: they come in a tower, because it's a nested sequence of classes; so every object has an associated tower of truncations. For example, in infinity groupoids, a space is n-truncated if and only if its homotopy groups are trivial for every choice of base point and every k strictly bigger than n. Similarly, a space is n-connective if and only if its homotopy groups are trivial for every choice of base point and every k strictly less than n — here I have to be careful to also require that the space is nonempty, and I'm assuming n is at least zero. This is what's classically known as an (n−1)-connected space; there's an off-by-one in the terminology. And then you discover that in infinity groupoids, a map is n-truncated or n-connective if and only if all its homotopy fibers, over all points of Y, are n-truncated or n-connective, as the case may be. In homotopy theory this condition is classically called an n-connected map — so you see, the source of the terminological confusion comes from homotopy theory. In spaces, classically, there's a construction of the n-truncation of a space: you kill off the homotopy in high dimensions by attaching cells of large dimension.
In a presentable infinity category, this construction exists formally. I'll note that if you want to compute truncation in a presheaf category, you can just compute it pointwise: being n-truncated is a limit condition, so pointwise truncation is a functor, and it turns out to compute the truncation in presheaves. And then if I have a left exact localization, I can use the adjunction to compute the truncation in E in terms of the truncation in presheaves — because both L and the inclusion i preserve the property of being n-truncated, since they both preserve finite limits, in particular pullbacks, which is all I need to define n-truncatedness. So to n-truncate an object of E, you n-truncate it as a presheaf and then sheafify. Okay. I will pause here and then we can have our break. — Let me finish my discussion of truncation and connectivity by briefly describing an example, just so you can see how some of these things fit together and relate to other mathematics. Let's think about n-gerbes. I'm going to follow Lurie here and define Gerb_n to be the intersection of the classes of n-truncated and n-connective maps. (According to my sources, these are really just the "plain" n-gerbes, and the general notion of n-gerbe is something much more general, which I wasn't aware of — so this is the notion I'll use.) This is the first nontrivial example of such an intersection: if I intersected the n-truncated maps with the (n+1)-connective maps, the intersection would be just the isomorphisms, because those are orthogonal classes. So this is the first case that isn't just isomorphisms. If I have an infinity topos E, let's write E_{Gerb_n} for the full subcategory spanned by the objects whose map to the terminal object is in Gerb_n. The interesting property of this full subcategory — it's an infinity category, but the interesting property is that it's almost a one-category in some sense.
What is actually true is that the infinity category of pointed objects in E_{Gerb_n} — things equipped with a section — is really equivalent to a one-category. In fact, you can describe it, and this may look like a familiar story: for n ≥ 2, pointed n-gerbes are equivalent to the one-category of abelian group objects in the one-category E_{≤0} of 0-truncated objects of your infinity topos. For n = 1 you use group objects, and for n = 0 there's not much to say — you just use pointed objects. As we'll see, E_{≤0}, the 0-truncated objects, is an example of a one-topos; so these are abelian group objects in the one-topos associated to our infinity topos. I won't prove this proposition, but I'll construct the functor from pointed n-gerbes to 0-truncated objects — though I won't give the abelian group structure. Given a pointed object, I can take the n-fold iterated diagonal of the map from the point — let me be careful not to call it an inclusion, since it may not be a monomorphism. In general, I'm going to write Ω^n(S) for the target of this iterated diagonal. Since S is n-truncated, the map from the point is (n−1)-truncated, and each diagonal lowers the truncatedness by one, so this object Ω^n(S) is actually 0-truncated. I call it Ω^n because in spaces this is the n-fold iterated loop space of a based space. That's the construction of the functor. It has an inverse functor, which takes an abelian group object A to something usually called K(A, n): the Eilenberg–MacLane object associated to A.
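The equivalence for pointed gerbes, in a LaTeX sketch (Ω^n and K(A, n) as in the lecture):

```latex
% For n \ge 2, mutually inverse functors
\[
  \Omega^{n} : \bigl(E_{\mathrm{Gerb}_n}\bigr)_{*}
  \;\rightleftarrows\;
  \mathrm{Ab}\bigl(E_{\le 0}\bigr) : K(-, n),
\]
% where \Omega^{n}(S) is the target of the n-fold iterated diagonal of
% the point 1 \to S (the n-fold loop object); it is 0-truncated because
% each diagonal lowers the truncatedness by one.
```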
So for instance, suppose I pick my abelian group object A; then I get an Eilenberg–MacLane object K(A, n) in E, and it's actually an n-gerbe. As we saw, since Gerb_n is a local class, every map in it has a universal family classifying the maps that locally look like the given map. So we get a universal family U(A,n)* → U(A,n). It's a universal family of gerbes — because gerbes form a local class, this map is itself in that class — of gerbes that locally look like p. So, just to orient yourselves if you know something about gerbes: alongside this universal family we have our typical example, the pointed gerbe associated to the abelian group A. This actually factors through another n-gerbe, namely the inclusion of the base point into K(A, n+1). So we have pullback diagrams like this. There's also a pullback diagram relating K(A, n) with K(A, n+1) — one is loops on the other — and it's not the trivial commuting diagram. I mention this because the map in the middle, p, is not a universal family; it's merely a pullback of a universal family, but it does classify something: it classifies gerbes banded by A. A banding is a notion I won't go into, but just to orient you, if you've heard about gerbes: it's a structure you put on the gerbe which is, in a certain sense, a choice of local identification of its fibers with K(A, n). Anyway, we get a very clean theory of these Eilenberg–MacLane gerbes from these notions in an infinity topos. Here's one more example I'd like to mention: the class of infinity-connected maps. The classes of n-connective maps form a descending chain of classes, so we can take the intersection. These are the maps, if you like, whose image factorizations have n-image equivalent to the codomain for all n. It's an intersection of local classes, so it's also a local class.
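The pullback diagram mentioned here, relating the two Eilenberg–MacLane objects, can be drawn as follows (this is my transcription of the board picture):

```latex
% K(A,n) is the loop object of K(A,n+1): the square
\begin{array}{ccc}
K(A,n) & \longrightarrow & 1 \\
\downarrow & & \downarrow \\
1 & \longrightarrow & K(A,n+1)
\end{array}
\qquad\text{is a pullback, i.e.}\qquad
K(A,n) \;\simeq\; \Omega\, K(A,n+1).
```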
We'll say an object is infinity-connected if its map to the terminal object is. In infinity topoi like infinity groupoids, and more generally presheaves of infinity groupoids, infinity-connected is the same as isomorphism. That's some version of the Whitehead theorem in homotopy theory: weak equivalences are by definition detected by homotopy groups, and this fact is a consequence. The interesting thing is that in an infinity topos there can exist non-trivial infinity-connected objects, and I'd like to give you an example. So here's my example of a non-trivial infinity-connected object. First I need the infinity topos. I'll start with a topological space with a certain lattice of open sets. I could also give the underlying point set of the topological space, but I don't need it, so I'm just going to give you the locale. So I can define a presheaf of infinity groupoids on it whose values at the opens U_0, U_1, U_2, … are equivalent to the terminal object and whose values at the opens V_n are Eilenberg–MacLane spaces — let's say K(Z, n), just to be specific. Of course, I'm going to arrange it so that it's actually a sheaf. So here's a picture of this functor. The squares that I'm going to draw here are going to be pullback squares, because K(Z, n) is the homotopy pullback of such a diagram — or, more classically, it's the loop space of K(Z, n+1). So this is actually a sheaf on X, which, as I've told you, is an example of an infinity topos. Now the claim is that this F is actually infinity-connected. However, it's not equivalent to the terminal object: you know that because it's a sheaf whose values are not contractible — its values at the V_n — so it's not the terminal object; it's not equivalent to it. How do you prove something like this? Well, I want to compute, let's say, the m-truncation of F, for any m, in sheaves.
Well, what I do is I compute it in presheaves on the open subsets and then I sheafify, and I compute the truncation of presheaves pointwise. The values are either already contractible or are these Eilenberg–MacLane spaces. But the m-truncation of an Eilenberg–MacLane space K(Z, n), for a fixed m, is trivial when n is large — when n is bigger than m. So if I fix an m and m-truncate everything in this picture up here, almost everything becomes contractible. The first few values at the V_n aren't, but eventually they all are, and from that point it's fairly easy to show that the sheafification is just the contractible object, because you can actually recover the sheafification without the first few values at the V_i's. I won't show it, but you can explicitly say what L looks like in this case. So that's an example of a non-trivial infinity-connected object in an infinity topos. That is a phenomenon that does happen.
In this series of lectures I will give an introduction to the concept of "infinity topoi", which is an analog of the notion of a "Grothendieck topos" which is not an ordinary category, but rather is an "infinity category". No prior knowledge of higher category theory will be assumed.
10.5446/55523 (DOI)
So I want to have a sort of grab bag of topics in this last lecture. I've talked about infinity topoi, their characterization in terms of descent, local classes and universal families, and truncation and connectivity. So notice that I can look at the full subcategory of n-truncated infinity groupoids; these are what are called n-groupoids. S_{≤0} is equivalent to the one-category of sets. S_{≤−1} is equivalent to the one-category — actually poset — of propositions. S_{≤n} is an example of an (n+1)-category, or, as I said, it's really an (n+1,1)-category, but I'll use Lurie's terminology, it's easier. In this setting an (n+1)-category is one all of whose mapping spaces are n-groupoids; that's the definition. So just as we can look at presheaves of infinity groupoids, we can look at presheaves of n-groupoids — I'll say (n−1)-groupoids. So C will be an infinity category, probably small, and I can look at functors on C^op with values in (n−1)-groupoids. This is actually the same as — equivalent to — the full subcategory of (n−1)-truncated objects in the presheaf category, presheaves valued in infinity groupoids. That's because truncation of presheaves is computed pointwise. So I'll write this as Psh(C)_{≤ n−1}. One more fact about this: if you have an infinity category C, then there's a best approximation to it by an n-category. I'll call it the map from C to h_n C; so h_n C is an n-category, and this map is initial among functors to n-categories. It turns out this functor category is always equivalent to a functor category on some n-category. So when I talk about presheaves of (n−1)-groupoids, I might as well assume that the domain is an n-category.
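The identifications just listed, written out in one display (Psh denotes presheaves; the labels are mine):

```latex
S_{\le 0} \;\simeq\; \mathrm{Set},
\qquad
S_{\le -1} \;\simeq\; \{\text{propositions}\} \;=\; \{\varnothing,\, *\},
% and, since truncation of presheaves is pointwise:
\mathrm{Fun}\bigl(C^{\mathrm{op}},\, S_{\le n-1}\bigr)
\;\simeq\;
\mathrm{Psh}(C)_{\le n-1}
\;\simeq\;
\mathrm{Fun}\bigl((h_n C)^{\mathrm{op}},\, S_{\le n-1}\bigr).
```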
That's the last general remark — we can do that. So we can define, now by obvious analogy to infinity topoi, the notion of an n-topos. It's the same definition, except that I replace the role of infinity groupoids and presheaves of infinity groupoids with presheaves of (n−1)-groupoids. So an infinity category E is an n-topos if there exists a small infinity category C and an accessible fully faithful embedding of E into presheaves of (n−1)-groupoids on C which has a left adjoint which is left exact. And as I said before, I can actually, without loss of generality, assume that C is itself an n-category if I wish. So for example, a one-topos — in this sense, in the infinity categorical language — is the same as a Grothendieck topos: every one-topos in this sense is equivalent to a Grothendieck topos in the classical sense, because when n equals one, that's presheaves of 0-truncated infinity groupoids, which are really presheaves of sets. A zero-topos is exactly the same as a locale — or really a frame, or a complete Heyting algebra, if you prefer other synonyms. If you start with an infinity topos and you look at the (n−1)-truncated objects, that is an n-topos. That's pretty easy to prove. Suppose our infinity topos is presented as a left exact localization of some presheaf category. I have a full subcategory of (n−1)-truncated objects in E, and I have a similar thing in presheaves, and remember that truncatedness is characterized by a condition on the iterated diagonal — a condition in terms of finite limits — and both L and i preserve this condition. Therefore they preserve truncated objects, therefore they restrict to functors on the subcategories, which necessarily have the appropriate properties, as you can prove.
Certainly i is fully faithful, you can show it's accessible, and L is still left exact. So for instance, for any infinity topos, the 0-truncated objects form a Grothendieck topos, and the (−1)-truncated objects — which are the same thing as subobjects of the terminal object — form a locale, or rather a frame, the opposite of a locale. Furthermore, there's a fact I won't try to prove now: any n-topos arises this way from some infinity topos. I'll give a warning here: when you have an infinity topos E, we'll call E_{≤0} the underlying one-topos, but that is not a complete invariant — you can certainly have different infinity topoi with the same underlying one-topos. An easy example comes from infinity groupoids. Take your favorite infinity groupoid X; then you can form the slice over it, and that is an infinity topos. Now I can take the 0-truncation; that is equivalent to the functor category from the fundamental groupoid of your space into sets. So the 0-truncation of this infinity topos only depends on the fundamental groupoid of X. However, the infinity topos itself actually depends on X itself, and the fundamental groupoid is certainly not a complete homotopy invariant of spaces. For any infinity topos we get this chain of full subcategories of objects at various truncation levels, and we have left adjoints going back, which are called truncation functors: τ_{≤1}, τ_{≤0}, and so on. The truncation functors are not left exact — they do preserve finite products, but they're not left exact. So at this point I'm going to return to a question that I introduced in the first lecture — or second lecture, I guess — which is where the Grothendieck topologies enter this game. I have to think generally about left exact localizations. So let me first establish some terminology. By a localization — and I think "reflection" is a better term here, but it's sort of standard to call it localization — I mean a picture like this.
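The warning example can be recorded compactly (Π₁ denotes the fundamental groupoid; the notation is mine):

```latex
% For an infinity-groupoid X, the slice S_{/X} is an infinity topos,
% and its underlying one-topos only sees the fundamental groupoid:
\bigl(S_{/X}\bigr)_{\le 0} \;\simeq\; \mathrm{Fun}\bigl(\Pi_1(X),\, \mathrm{Set}\bigr),
% so homotopy-inequivalent spaces X with equivalent Pi_1(X) give
% different infinity topoi with the same underlying one-topos.
```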
You have a fully faithful embedding with a left adjoint. Because it's fully faithful, I can identify D with its essential image, and in practice I will just do that: I'll assume that D is a full subcategory. When you have a localization, you have something I'll call its kernel. This may have some standard name, but I couldn't figure out what it was, so I'll just call it the kernel: the class of morphisms in E that are inverted by the left adjoint — the class of morphisms that L takes to an isomorphism. Any kernel has the property of being strongly saturated, assuming E has colimits. Isomorphisms, of course, are in the kernel, sort of logically. It has the two-out-of-three property: if I have two morphisms and their composite, any two being in T implies all of them are in T. And lastly, the kernel is stable under colimits computed in the arrow category: a colimit in the arrow category of morphisms in T is also in T. So every localization determines a strongly saturated class — you send the localization to its kernel. That's an injective correspondence, in the sense that from the kernel you can recover the localization: the full subcategory is the class of what are called local objects, the objects X such that the map Map(B, X) → Map(A, X) induced by any morphism t: A → B in the class T is an isomorphism. If E is a presentable infinity category, then you can classify the accessible localizations using the theory of presentable infinity categories. These correspond exactly to those strongly saturated classes that are generated by a set — so T equals S-bar for some set S of morphisms; it's a size restriction. This S-bar is the strong saturation of S: the smallest class of morphisms containing S and closed under those properties — isomorphisms, two-out-of-three, and colimits. So this is a correspondence.
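The recovery of the localization from its kernel, in symbols (my notation):

```latex
% T = kernel = { f : L(f) is an equivalence }.  The essential image
% of i consists of the T-local objects:
D \;\simeq\;
\Bigl\{\, X \in E \;\Big|\;
\mathrm{Map}_E(B, X) \xrightarrow{\ \sim\ } \mathrm{Map}_E(A, X)
\ \text{for every } (t : A \to B) \in T \,\Bigr\}.
```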
For any set, you can build a localization whose kernel is the strong saturation of that set. If I want to think about left exact localizations — these should also be accessible — those are the strongly saturated classes that are the saturation of a set, where the left exactness corresponds to the kernel being, in addition, closed under base change: closed under taking pullbacks along any map. That's proved by an elementary argument which actually works just as well in the one-categorical setting. So the key question one would like a good answer for: if you have an infinity topos, how do you classify the left exact localizations? There are some things to say about this — I will not say everything there is to say; I think there's going to be a talk in the conference next week about this exact topic — but I'll tell you a little bit. So suppose I have an infinity topos and a left exact localization with kernel T. That T gives you a local class: I think of the kernel as a full subcategory of the arrow category, and I take its intersection with the Cartesian arrow category. That intersection is a local class, which is actually very straightforward to prove using the left exactness of the localization. It's certainly closed under base change, and it's closed under colimits in the Cartesian arrow category because in fact it's closed under colimits in the arrow category. As a consequence, we have the following: if I have a pullback square where I'm pulling back along a cover, then if f is in the kernel, so is g. That's a property of kernels because they are local classes. Now, still assuming I have an infinity topos and a left exact localization with kernel T, I have the following interesting criterion for membership in the kernel.
If I take a morphism f and form its cover–mono factorization, so i is a monomorphism and p is a cover, then the morphism f is in the kernel if and only if i is in the kernel and the diagonal of f is in the kernel. This is a more or less elementary argument using left exactness; I'll give it very briefly. In one direction, suppose f is in the kernel. That means that L(f) is an isomorphism — L is the left adjoint, the sheafification. L of a monomorphism is automatically a monomorphism, because it's a left exact localization. The hypothesis is that L(f) is an isomorphism, and now, when I apply L, I've got a monomorphism which has a section — the inverse of L(f) composed with L(p) — and it is true even in infinity categories that monomorphisms with sections are isomorphisms. So this tells you that i is in T: L(i) is an isomorphism. The other property is straightforward: because L is left exact, L of the diagonal is the diagonal of L, and you can use that to show that Δf is in T. In the other direction, I want to show that if these two things hold, then f is in the kernel. First of all, if I think about the diagonal of f, which is a map from A to A ×_B A, I can factor it through the diagonal of p, where p is part of the mono–cover factorization. The second map in that factorization is a pullback of i in some sense — it is itself a monomorphism. Because it's formed by pullbacks from i and the localization is left exact, if I assume that i is in T, then so is j. And since I'm assuming that the diagonal of f is in T, I actually get that the diagonal of p is in T. I want to show that f is in T; well, I also have this diagram on the bottom: here's the diagonal of p, mapping to this pullback, which is the square on the right.
Well, if the diagonal of p is in T — since it factors with q, it factors the identity — two-out-of-three implies that q is in T. But now I also have this p here, which is a cover, and I just proved that the property of being in T is local; so because p is a cover and q is in T, I get that p is in T. And since I've already assumed that i is in T — that was my hypothesis — the composite is in T. I just thought I'd go through that argument; I don't think I really used anything here that wouldn't work in, say, a one-topos. As a consequence, if I have one of these left exact localizations of an infinity topos with kernel T, then for any finite n — I haven't defined this for infinity — the intersection of T with the class of n-truncated maps is determined entirely by the intersection with the monomorphisms. To make it more explicit: if I have an n-truncated map, which means its (n+2)-fold iterated diagonal is an isomorphism, it's in the kernel if and only if each of the maps i_0, i_1, and so on up to i_{n+1} is in T, where those are constructed inductively — they are the mono parts of the cover–mono factorizations of the various iterated diagonals. That's what we've learned here. Let's make a definition; this definition is due to Lurie. A topological localization of an infinity category is a left exact — I was going to say I should also add "accessible" here, but actually I don't have to. Let me start over. A topological localization of an infinity category is a left exact localization whose kernel is generated by some set of monomorphisms. That's the definition. And the theorem you can prove — oops, I should say something's a little bit off here: this is about an n-topos.
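The criterion just stated can be written out as follows; the indexing of the maps i_k is my best guess at what was on the board:

```latex
% For f n-truncated, with cover-mono factorizations of the iterated
% diagonals  \Delta^{k} f = p_k \circ i_k  (p_k a cover, i_k a mono):
f \in T
\;\Longleftrightarrow\;
i_0,\, i_1,\, \ldots,\, i_{n+1} \in T.
```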
Every left exact localization of an n-topos, with n less than infinity, is topological. That's a combination of two facts: the fact I proved here, that the n-truncated maps in the kernel are determined by the ones that are monomorphisms, and the fact that all maps in an n-category like an n-topos are truncated at some level — they are all in fact (n−1)-truncated; it doesn't matter, they're all finitely truncated at some given level. So what we're actually interested in are the topological localizations of the presheaf category, meaning presheaves valued in infinity groupoids. So if I have a topological localization, by definition the kernel is generated by — is the strong saturation of — a set of monomorphisms. Now monomorphisms form a local class. In the presheaf category we have a collection of objects, the representable presheaves, which are generators for that infinity category in the usual sense: everything is a colimit of some small diagram of representables. So if I have a topological localization, I can argue that I can replace my set S with a particular kind of set: a set of monomorphisms whose targets are representables. I actually have to use the fact that the kernel is a local class in order to do this. So what I learn is that every topological localization of presheaves determines a Grothendieck topology on the infinity category C. So what's a Grothendieck topology on C? You can define it as a collection of sieves — that is, a collection of subobjects of representable presheaves — satisfying a list of properties which I won't give, but it's the standard list, familiar as one of the standard ways of describing a Grothendieck topology on a one-category.
Very often people choose to describe the topology in a different language, using covering families, but any family of maps to an object of C determines a sieve, so you can reformulate in terms of sieves. I'll note here that this is an infinity analog of the notion of a Grothendieck topology, but it doesn't actually require any deep ideas: these are in bijective correspondence with Grothendieck topologies on h_1(C). Remember, every infinity category has an associated initial one-category that it maps to — in this case it's called the homotopy category of the infinity category. So in fact Grothendieck topologies on C correspond to Grothendieck topologies on the homotopy category — to conventional Grothendieck topologies in the usual sense. That's because you can read off subobjects of representables from the homotopy category. So the conclusion is that every topological localization of presheaves is of the form Sh(C, S) for some infinity site. Associated to an infinity site — an infinity category with a Grothendieck topology — you can describe a full subcategory of sheaves: exactly the presheaves F such that mapping each sieve inclusion of the topology into F gives an isomorphism. So that characterizes the topological localizations of presheaves. Now I can do something similar when n is less than infinity, that is, I look at left exact localizations of presheaves of (n−1)-groupoids. In this case every left exact localization is topological — I think I said something like that. So here every left exact localization is a category of sheaves for some site (C, S), where C is an n-category and S is a Grothendieck topology on the n-category. If n is less than infinity, everything is a category of sheaves on a site.
In the case of n equals 1, this is the classical statement about Grothendieck topoi, which is usually actually taken as the definition. Now of course I've emphasized that in the infinity case, this holds only for the topological localizations. There's also a notion of cotopological localization: a cotopological localization of a presentable infinity category is an accessible left exact localization with the property that the only monomorphisms in its kernel are the isomorphisms. So it's sort of as far as you can be from being topological. And then you can show that an accessible left exact localization of an infinity topos is cotopological if and only if its kernel is contained in the class of infinity-connected maps. The proof — I won't write it out, but it's by the same ideas I've already used to show that the topological localizations are determined by monomorphisms. Actually, there's a sketch of a proof here, but I'm not really going to give the full details. In one direction it's obvious: if the kernel is contained in the infinity-connected maps, then the localization is cotopological, because the only infinity-connected monomorphisms are the isomorphisms. In the other direction, if you have a cotopological localization, you have to show the kernel is contained in the n-connective maps for all n — wait, I'm going the wrong way around; I think this page is again messed up somehow. We're proving this direction. Very good. So if you have a cotopological localization, you first show that all elements of the kernel are covers. To do that, you just use the epi–mono factorization — the cover–mono factorization — of f. If f is in the kernel, well, i is a monomorphism; we've seen this before, so L(i) is a monomorphism, and L(f) is an isomorphism.
You can use that to show that L of all these maps are isomorphisms, because you have a monomorphism with a section. This shows that i is in T, and therefore, because the localization is cotopological, i is an isomorphism; therefore f is a cover. And then there's a similar argument that works inductively on n to show that T is contained in the n-connective maps for all n. You have to use a fact I haven't proved: if f is a cover, then f is n-connective if and only if its diagonal is (n−1)-connective. That's an inductive argument coming ultimately from the inductive definition of truncatedness. Let me give an example of such a cotopological localization. If I have an infinity topos, we say that an object is hypercomplete if its map to the terminal object is right orthogonal to all the infinity-connected maps. This determines a full subcategory of hypercomplete objects, and that full subcategory is an example of a cotopological localization; in fact, the kernel of this localization is exactly the collection of all infinity-connected maps. So this is in some sense the maximal cotopological localization: kill all the infinity-connected maps. It's a formal consequence of these definitions that any n-truncated object is automatically hypercomplete — so the n-truncated hypercomplete objects are all the n-truncated objects, for n finite. So hypercompletion is a phenomenon that is not seen by finitely truncated objects. For instance, in E_{≤0}, all the objects are hypercomplete; truncated objects are always hypercomplete. So, for example, that example I gave — not in the previous lecture, but earlier this hour, I guess — of a particular topological space with an object that's infinity-connected but not trivial, that's an example of an infinity topos that is not hypercomplete.
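The hypercompletion, as just described, in symbols (the notation E^∧ is mine):

```latex
% Hypercomplete objects are those local with respect to the
% infinity-connected maps; they form the maximal cotopological
% localization of E:
E^{\wedge} \;=\;
\bigl\{\, X \in E \;:\; X \to 1 \ \text{is right orthogonal to every } \infty\text{-connected map} \,\bigr\},
\qquad
E_{\le n} \;\subseteq\; E^{\wedge} \quad (n < \infty).
```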
It has a non-trivial hypercompletion, because it has non-trivial infinity-connected maps. Now, there's a theorem which does describe all the accessible left exact localizations of an infinity topos: they all factor, essentially uniquely, as a composite of two localizations — first a topological localization, and then a cotopological localization of that. As a consequence, every infinity topos is a cotopological localization of sheaves on some infinity site. So the idea is that E will be a cotopological localization, and the way it works is that L inverts at least some infinity-connected maps. Anyway, the consequence is that not every infinity topos is sheaves on a site, but everything is such a cotopological localization. I should mention some historical things here. There's actually a class of infinity topoi that were constructed before the infinity categorical language, by André Joyal and then by Rick Jardine. These are model categories of simplicial presheaves on a one-site. These model categories give you infinity categories, and they actually give you hypercompletions: they construct the hypercompletion of what I'm calling sheaves on the site — sheaves on the one-site viewed as an infinity site, of course; a one-site is actually a kind of infinity site. That's because, if you look at their construction, they define their weak equivalences in terms of a notion of homotopy group, and their notion of homotopy group cannot see infinity-connected objects, by the nature of homotopy groups. This ultimately leads to a question: is every infinity topos equivalent to some infinity category of sheaves on an infinity site? It may seem like I've told you the answer is no, but what I've told you is that there are examples of infinity topoi which admit non-trivial cotopological localizations.
In some sense, the proofs I've given tell you that the canonical site of an infinity topos might not have E as a topological localization. The obvious thing to do is to take a small full subcategory of E that's closed under finite limits and show that E is a left exact localization of presheaves on it — that's the canonical site. Those are often not topological localizations, for a random infinity topos. So it's an open question, I believe, whether this is true. I thought for a long time that of course the answer is no — look at these examples I have — but they aren't actually examples of this. It seems very hard to address the question, because I don't know anything that distinguishes the infinity topoi which come from infinity sites — which are sheaves on infinity sites. I don't know of any property those have that something else might not have, so I don't even know where to start to try to prove this. In principle it's possible that the answer is yes; I think the answer will likely be no, but who knows? Okay, I see I'm still going a little bit over on time. I do want to talk about geometric morphisms, and this is very straightforward, because you just do the things that work for one-topoi. A geometric morphism between infinity topoi is an adjoint pair of functors whose left adjoint is in fact left exact — preserves finite limits. This gives you an infinity category of geometric morphisms. The notations people use here are f_* from E to F, but you can also have f^* from F to E — it depends whether you prefer to think about the left adjoint or the right adjoint. This also gives you an infinity category of infinity topoi, and it has the property that the space of maps of infinity topoi from E to F is the maximal sub-groupoid of this functor category. Of course, these are potentially large infinity categories.
And this is a large infinity groupoid in general. I actually showed you the recipe for computing geometric morphisms last time — to compute, for instance, a geometric morphism to presheaves on something. So let's compute the f^*'s, the left adjoints. Those are going to be the colimit-preserving functors which are also left exact — a full subcategory of the category of colimit-preserving functors. Presheaves are the colimit completion of C, so colimit-preserving functors from presheaves are the same as just functors from C; the inverse is left Kan extension along the Yoneda embedding. And you can describe what this full subcategory is: these are the colimit-preserving functors satisfying the conditions I gave — F takes the terminal object to the terminal object, and F preserves pullbacks of cospans of representables. It's a remarkably nice set of conditions. Consequences: well, an immediate consequence is that there exists a unique geometric morphism to infinity groupoids — so infinity groupoids is the terminal infinity topos. If you have an infinity topos, a point is just a geometric morphism from S. As you can do for one-topoi, you can say an infinity topos has enough points if the left adjoints — the stalk functors — for all points are jointly conservative. But there's a warning that comes in here, and it has to do with the fact that infinity groupoids are hypercomplete: the infinity-connected maps in infinity groupoids are just the isomorphisms. So if I define having enough points in this sense, then E can have enough points only if it's hypercomplete — the stalks cannot see infinity-connected objects, because infinity groupoids doesn't have any. In fact, in practice — at least if you look at some of the things that Jake has done — you take as a definition: E has enough points if its hypercompletion has enough points in the sense I've described.
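The description of geometric morphisms into a presheaf topos, assembled into one line (the superscripts on Fun are my notation, for left-exact and colimit-preserving functors respectively):

```latex
\mathrm{Fun}^{*}\bigl(\mathrm{Psh}(C),\, E\bigr)
\;\subseteq\;
\mathrm{Fun}^{\mathrm{colim}}\bigl(\mathrm{Psh}(C),\, E\bigr)
\;\simeq\;
\mathrm{Fun}(C,\, E),
% the subcategory consisting of those F with F(1) \simeq 1 which
% preserve pullbacks of cospans of representables.
```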
So that's an additional issue that shows up in this setting. Of course, even for a one-topos, you could just straight up fail to have enough points. Oh, here, if you want to compute maps into a slice, there's a recipe. The maps of an infinity topos F into the slice of E over X, which is also an infinity topos, correspond to geometric morphisms to E together with a section of the pullback of X in F. As a consequence, this actually gives you a fully faithful embedding of your infinity topos into the slice infinity category of infinity topoi over E. You just send an object to its slice together with its forgetful functor to E. Here's one more example. Torsors; this is sort of the classic example of a geometric morphism. Let's take a small infinity groupoid G, or it might be easier to think of a group, so it has one object. Let's compute the geometric morphisms from E to presheaves on G. So I'll actually compute the left adjoints. These are going to be the functors from G op to E which satisfy my conditions: takes the terminal object to the terminal object, and preserves pullbacks of cospans of representables. The second condition is automatically satisfied because groupoids have pullbacks; they just do, because everything's an isomorphism. So I only actually need the first condition. So this will correspond to the functors P whose colimit, as a functor on G op, is the terminal object. This is actually the correct definition of a G-torsor in the context of infinity topoi. This may not look like what you would call a G-torsor, but here's why. This condition is equivalent to the following; well, it certainly implies the following. First of all, if I have such a functor with this property, that its colimit is the terminal object, then I obtain a cover, as a morphism of E: I take the coproduct of all the values of my functor at all objects of G, and the map from the coproduct is a cover, by the general property I told you before.
Second of all, I have another thing that comes from descent. So let's suppose I have one of these P with this property. For any pair of objects x, y in my infinity groupoid G, I can associate a map which you can think of as the action. So I have P x, P y, and then I have the infinity groupoid of maps from y to x in G; I can pull that back. Pi here is the projection, the unique geometric morphism to infinity groupoids. So if you like, this is a map P times G to P, if G is a group; I'm going to pull back G, so it's the action map. Now this fits together if I let y vary. So this actually gives me a functor from G op to my infinity topos. The value at y is what I've written above; the x is fixed. So this is, for each x: P of x times pi star of the representable presheaf on G, pulled back to E, maps to P. Now I can form the colimit with respect to G op. By definition, because it's a G-torsor, the colimit of P is the terminal object. You can compute the colimit on the left-hand side using universality of colimits to get rid of this factor that's just constant. That's really P of x times the colimit of the representable. Colimits of representable functors are terminal, and pi star preserves colimits. So in fact, this is just P of x; that's the colimit of that transformation. Now the thing about this transformation: it's actually a Cartesian natural transformation of functors from G op to E. For every morphism in G, you plug that into the transformation, you get a pullback square, and that should be just because G is a groupoid. So in fact, you get a square where both arrows are isomorphisms. Well, now I have descent. It tells me, if I form this colimit and then I pull back again to each object, I get a pullback square. So I have a pullback square for every x and y. If I write this out just for a group, then it's really saying that the square P times the pullback of G, to P, with P to 1, is a pullback.
In other words, this thing is equivalent to the product of P with P by a sort of tautological pair of maps, which are the projection and, if you like, the action map. That recovers the classical definition of a torsor in the one-category case, but it's in some sense a different definition in the infinity-category case. So torsors oddly have a much easier, cleaner definition in the infinity setting. All right. I'm very low on time. Let me run through one remaining topic. There's something called n-localic reflection. So I have an infinity category of infinity topoi, or of n-topoi, and there's a functor which is given by: take your infinity topos and restrict to the full subcategory of (n-1)-truncated objects. That admits an adjoint, which, hopefully this is correct, is a right adjoint. I'll call this R_n; let's call it the n-localic reflection. You can always promote, for any n, an n-topos to an infinity topos in a canonical way. And here's the formula. I start with an n-topos; we said that those were always categories of sheaves of (n-1)-groupoids on a site. I can actually pick this so that the underlying category C has finite limits. So the recipe to compute the n-localic reflection is: you pick such a site with finite limits that presents your n-topos, and then the n-localic reflection is sheaves of infinity groupoids on the same site, which is also an infinity site. I guess I'm supposed to call this an n-site if I'm consistent. So that's the formula for n-localic reflections. Everybody forgets the fact, well, I always forget the fact, that you have to use a category with finite limits, otherwise you get the wrong answer. Okay, so at the end of my time, I'll notice one more property here. You can define cohomology of an infinity topos. So I have this unique geometric morphism to infinity groupoids. So I have a pair of adjoint functors: the constant sheaf and the global sections.
So if I take an Eilenberg-MacLane space in spaces, I can pull that back to E and then push it forward again, taking global sections, and that defines the cohomology of my infinity topos with constant coefficients in an abelian group: we take the appropriate homotopy group of this thing. I can use various n here which are bigger than j. So of course, notice Eilenberg-MacLane spaces are n-truncated. So this invariant is certainly not a complete invariant; in particular it does not distinguish, say, E from its own hypercompletion, because the pullback of the Eilenberg-MacLane object is already going to be n-truncated. This gives you the same answer for both E and its hypercompletion. However, this idea does lead to a very nice formulation of the idea of the shape of an infinity topos. Using this projection q, you get the composite functor q lower star following q upper star, and this is an example of a functor from S to S, and it's in a full subcategory which is called Pro-S, which we can define to be the collection of functors from S to itself which are left exact, or more precisely, we define it to be the opposite of that category. It's also, it turns out, a category of inverse limits of representables, filtered inverse limits, so it's reasonable to call it Pro-S. It has an embedding of S itself via the Yoneda functor, and so every infinity topos has a shape, which is a pro-space in this sense. And you can prove, for instance, that if X is a nice enough space, if it's paracompact, then the shape of sheaves on X... Notice here I'm using space in the sense of actual topological space. If it's paracompact, then its shape is in fact determined by its underlying homotopy type. Bar X here is the infinity groupoid which is the usual homotopy type of the topological space X. Every topological space has a homotopy type which you think of as an infinity groupoid.
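The two definitions in this stretch of the talk can be recorded in formulas; this is my transcription into standard notation, not the slides themselves:

```latex
% Cohomology of an infinity topos E with coefficients in an abelian group A:
% pull back an Eilenberg-MacLane space along q : E -> S, take global
% sections, and take a homotopy group (any n >= j gives the same answer):
\[
  H^{j}(\mathcal{E}; A) \;=\; \pi_{\,n-j}\bigl(q_{*}\,q^{*} K(A,n)\bigr),
  \qquad n \ge j .
\]
% The shape of E is the composite functor
\[
  q_{*} \circ q^{*} \;:\; \mathcal{S} \longrightarrow \mathcal{S},
\]
% which is left exact, i.e. it defines an object of Pro(S).
```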
So paracompact topological spaces have sort of classical shapes, but there's a general theory of shapes. All right, since I'm out of time I'll have to stop. I wrote down some pages that talked about applications. An important one is that you can talk about sheaves of infinity categories on an infinity topos. By the way, do you want me to go on or do you want me to wrap things up? I can take five minutes and just fill in these pages. Yes, maybe just five minutes, I think. Okay, I mean, it's quick; I can just fill in the definitions. If I have an infinity topos and I have an infinity category A that's complete, we can just define a sheaf on E with values in A to be a functor from E op to A which preserves all small limits. If E is sheaves on a site, you can reinterpret this in a more conventional way as sheaves on a site. So for instance I can define things like sheaves of infinity categories, maybe large ones, even, on E. So the example I wanted to mention is like the type of example that I mentioned in the very beginning. Just to be specific, if I have a scheme, I have for every open set some derived category of quasi-coherent sheaves. That's a one-category. This doesn't form a sheaf, but derived categories come from infinity categories; the derived category is the homotopy category of an infinity category. And the assignment that sends U to its derived infinity category is in fact a sheaf in this sense. This provides a language for talking about sheaves of infinity categories. In some sense this is interesting even if E is just a one-topos. Of course, if you have a one-topos you can promote it to an infinity topos and then talk about sheaves of infinity categories on an infinity topos. And this is already useful in classical settings: people want to talk about these kinds of derived categories, and people are doing that. Infinity topoi were really introduced by these authors for talking about derived geometry.
So if you have A as some category of ring-like objects, which could be commutative DGAs, or E-infinity ring spectra if you like homotopy theory, then you obtain the notion of a ringed infinity topos: an infinity topos together with a sheaf with values in A, one of these generalized categories of rings. And this leads to a notion of derived geometry. To be honest, most of the interest is in this category of generalized rings, but in some sense you need this notion of infinity topos in order to make decent definitions. Differential cohomology. This turns out to be an area where these have turned out to be useful. So differential cohomology: these are invariants of smooth manifolds that combine things like singular cohomology, say with integer coefficients, with the cohomology represented by differential forms. So what you can do is form an infinity topos: a big infinity topos of sheaves of infinity groupoids on the large site of smooth manifolds, which is actually essentially small, so it's fine. This contains objects which represent the singular cohomology of manifolds, but it also contains objects like Omega n, which represent things like differential forms. And you can combine these to give things that represent differential cohomology. So this is a nice context where they're useful. Here's a page where I couldn't think of anything useful to say on one page, so I won't. There's interesting recent work by Barwick, Glasman and Haine in a paper called Exodromy; I don't know how to pronounce that word, Exodromy. It uses stratified infinity topoi and theories of constructible sheaves on infinity topoi to do some kind of generalization of classical Galois theory in the context of algebraic geometry.
Finally, I should mention the logical aspects of infinity topoi, and I didn't want to take much time on this because I don't understand it very much. So one-topoi have an internal language; there's something, for instance, called the Mitchell-Bénabou language, and that has an interpretation in a one-topos. There's a notion of type theory due to Martin-Löf called dependent type theory which has an interesting aspect: it introduces types for identity. Instead of just having a sort of relation, we have an identity type between any two terms of some other type. It also, in many formulations, introduces type families, so there's some sort of universal type. And it was noticed that these identity types behave like spaces of paths in a space, which suggests a homotopy-theoretic interpretation, which was developed by Awodey and Warren and by Vladimir Voevodsky, and which led to the notion of univalent type theory. Let me not try to describe what univalent type theory is, because honestly that's a whole lecture in itself, and one I'm not confident to give. But Voevodsky showed that infinity groupoids, that is, simplicial sets, form a model for his univalent type theory. In particular, univalent type theory has this universal type, and that corresponds to what I called the object classifier, the base of the universal morphism classifier. More generally, in fact it was early recognized that this was true, and it's been proved by Shulman, that in some sense every infinity topos is a model for univalent type theory. So you can say that univalent type theory is the internal language of an infinity topos in some sense. But you do have to be careful with that statement. There is an issue here: it's not really an internal language in the way that I've been talking about infinity topoi. Type theory, including this univalent type theory, has functions which can be composed, and that composition is actually associative on the nose.
It's just built into the way functions are described, because they look vaguely like functions of sets. So any model of such a type theory must be a one-category. And that's the sort of model that these people have constructed. They've constructed models that, for instance, are a Quillen model category, which is only a one-category. But the Quillen model category structure is used in describing how the model works; fibrations play an important role. And so this theorem really says that it's a Quillen model category, that can be chosen to be a Quillen model category whose corresponding infinity category, the infinity category you can extract from that one-category, is an infinity topos. So that's the sense in which every infinity topos is a model for univalent type theory: it's via one of these Quillen model categories, and the interpretation is actually in the one-category. I don't think anybody knows how to make this kind of internal language that describes doing infinity category theory. This isn't really something I think anybody's figured out how to do in a way that seems practical to me. So I think that's something that's an interesting subject, if you like those things. All right, that's all I had to say. Thank you. Thank you so much for this very nice and very rich course.
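As a small aside on the identity types discussed above: they can be written down directly in a proof assistant. This is my own Lean 4 sketch, not something from the lecture; the name `MyEq` is made up to avoid clashing with Lean's built-in equality.

```lean
-- Martin-Löf's identity type: for any a b : α there is a *type* of
-- identifications between a and b, not a mere true/false relation.
inductive MyEq {α : Type} : α → α → Type where
  | refl (a : α) : MyEq a a

-- Path induction: to prove something about every identification,
-- it suffices to treat the reflexivity case.
def MyEq.symm {α : Type} {a b : α} : MyEq a b → MyEq b a
  | .refl _ => .refl _

def MyEq.trans {α : Type} {a b c : α} : MyEq a b → MyEq b c → MyEq a c
  | .refl _, h => h
```

The homotopy-theoretic reading is that `MyEq a b` behaves like the space of paths from a to b, which is what suggests the interpretation in infinity groupoids mentioned in the talk.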
In this series of lectures I will give an introduction to the concept of "infinity topoi", which is an analog of the notion of a "Grothendieck topos" which is not an ordinary category, but rather is an "infinity category". No prior knowledge of higher category theory will be assumed.
10.5446/55524 (DOI)
So Daniel, it's a pleasure. Thank you, Laurent. So I want to thank you and Olivia for being at the origin of all this project, of the proposal to study artificial neural networks. I want to thank my fellows from the laboratory. And another source of my interest in this proposal, coming after the work about information that I will speak about a bit, is with Anna Bertos, who is a neuroscientist, trying to find a kind of internal geometry in the brain; and we propose that these structures are organized like a stack. So we hope there is also application to natural neural networks, not only artificial ones. Other acknowledgments are written in this text. So this is a good reference; it was yesterday, and Jean-Claude gave us the good reference to this paper, which contains all that I will explain, plus the data. So the first goal was to find a frame for deep neural networks, the DNNs, and Jean-Claude has explained where. So I will skip some slides, because they were already presented by Jean-Claude, to focus on a different topic, which is, in the same spirit, how we include also in this framework of topos theory the possibility to combine modules in a network, and to make networks speak with one another. So we saw the site, which is a category equipped with a topology, obtained by adding this fork. I don't remind you of the construction. But in fact, I want to add something: I will work mostly in another setting, because what matters is the topos, and as Olivia explained to us, there is a lot of freedom in the choice of site. In this finite case of sites without endomorphisms, in fact, we can forget the topology; this was remarked by Joel Friedman some years ago, and it depends on Caramello's theorem, which gives a necessary and sufficient condition for a topos to be a topos of presheaves over a category. And here we have such a category of presheaves, presheaves over a poset, in fact.
And I don't want to explain in detail all this structure, but perhaps we look at this kind of example; a better one, perhaps this one. So it is a representation of the graph of the cells which are in an LSTM. And I come back to it at the end of the talk, because in some sense they play a very interesting role in having a semantic content, and we will try to understand why. Why, for example, not only for the problem of vanishing gradients, do they continue to be used now, 30 years after their invention, each time, for example, you have to translate a text? It's known that networks mainly detect statistical properties of the text, but in some sense this says that this kind of cell has at least some syntactic development, which is not yet semantic. And so you see the ordinary graph on the left, and now I want the poset, here on the right. And you see what I added: in some sense, these red half-arrows, which make these internal points look like inputs. So in some sense, you have two kinds of input. You have the input which comes from the data, for example x t here, the data coming at time t; x t plus 1, new data coming at time t plus 1. You see the arrows in the reverse direction of the information flow, because we look at contravariant functors, and for many good reasons. But here, the finite setting does not matter so much. You see that, in some sense, these internal parts are also sending information, and that's the lesson: dynamically, they would be neutral in an ordinary neural network, but in the future, we can expect that some kind of spontaneous activity can come from these points, because they are similar to the inputs. And after that, you have the output of the network too. And as the output is here, the h t is the output of all the steps; it's a memory for the future. C t also is part of the memory for the future; it starts here, and it goes away. And so we work in this sense with presheaves, and the activity is represented by the presheaf dynamics.
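The LSTM cells pictured here have a standard dynamics; the following is a minimal pure-Python sketch of one cell step (the textbook equations, not the talk's notation; the weight layout `w` is my own choice, with scalar states for simplicity):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, w):
    """One step of a standard (scalar) LSTM cell.

    c (cell state) and h (hidden state) are the two memories carried
    from step t to step t+1, as in the graph described above.
    `w` maps each gate name to a triple (w_x, w_h, bias); the gate
    names i, f, o, g are the conventional ones.
    """
    def gate(name, act):
        wx, wh, b = w[name]
        return act(wx * x_t + wh * h_prev + b)

    i = gate("i", sigmoid)        # input gate
    f = gate("f", sigmoid)        # forget gate
    o = gate("o", sigmoid)        # output gate
    g = gate("g", math.tanh)      # candidate value
    c_t = f * c_prev + i * g      # new cell state: the long-term memory
    h_t = o * math.tanh(c_t)      # new hidden state: the output memory
    return h_t, c_t
```

With all weights and biases zero, the gates sit at 1/2 and the cell state is simply halved at each step, which already illustrates how c carries (and decays) memory across time.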
And the weights are also represented by a presheaf. And in some sense they are dual, because the presheaf X transfers information, and the presheaf W forgets the weights by passing to the next layer, which corresponds well to what happens in backpropagation learning. But we just have to remember that we have transformed the graph into a poset, and we have a dynamical object on this poset, in the topos of presheaves. And one is the kind of fibered object over the weights: for each system of weights, we have one dynamic. But this dynamic is not really a usual dynamic, because in some sense, at these points, the forks we have are where this possibility of spontaneous activity lies, which at this moment we do not use. In some sense, the dynamic is given by the sections. So the sections, with the hypotheses we make, are totally characterized by the data in input; the states of the layers in input determine the section. So the dynamic in this sense is a section of a presheaf. And here in equation one, you have the characterization of this presheaf. Perhaps it's much too long to describe, and it was already described by Jean-Claude. And there is equation two here; only equation two, look at it. It says how the full dynamic is made. So you have the state x index r, you have the weights, the whole system of weights coming into the network, and you apply only part of these weights to the state; and at the same time, you forget this part of the weights, and you keep only the weights which come after. And here are the details; that's not very, very interesting, you could look at the paper. Because, and this is theorem one, the backpropagation can be represented here as a flow of natural transformations. And of course, what is interesting in practice is more the lemma one, which makes the success of this backpropagation algorithm, because you have this explicit sigma-pi formula to write the gradient.
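The sigma-pi (sum over paths of products of local derivatives) structure of the gradient can be illustrated on a tiny chain network. This is a generic illustration of the chain rule, not the paper's notation; with a single chain there is only one path, so the sum reduces to one product:

```python
import math

# A tiny chain network x -> a1 -> a2 -> a3, with a_k = tanh(w_k * a_{k-1}).

def forward(x, ws):
    """Return the list of activations [x, a1, ..., a_last]."""
    acts = [x]
    for w in ws:
        acts.append(math.tanh(w * acts[-1]))
    return acts

def backward(acts, ws):
    """Backpropagation: d(output)/d(w_k), accumulated layer by layer.

    `upstream` carries the product of local derivatives from the output
    down to the current layer -- the 'pi' part of the sigma-pi formula.
    """
    grads = [0.0] * len(ws)
    upstream = 1.0                           # d(output)/d(a_last)
    for k in reversed(range(len(ws))):
        pre = ws[k] * acts[k]                # pre-activation of layer k
        local = 1.0 - math.tanh(pre) ** 2    # derivative of tanh
        grads[k] = upstream * local * acts[k]   # d(output)/d(w_k)
        upstream = upstream * local * ws[k]     # pass to the layer below
    return grads
```

A finite-difference check confirms the products of local derivatives agree with the numerical gradient, which is exactly what makes the explicit formula usable in practice.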
And so it gives access to the gradient. After that, it's another matter, of course, to integrate the gradient, because it's known that in most applications you have many, many local minima; it's a non-convex function. And here is a schematic view of the functioning that we will keep in mind. And with this analogy, in some sense, we want to say that this is a kind of dynamical system with parameters, which are given by the weights. And it's not only learning which is important; in fact, the learning really depends on the data and on the question asked at the output. And so this influence of input data and output will make the weights change, depending on the kind of input and output you want. And you certainly have a kind of catastrophe study to perform, to understand how the dynamic depends on the weights. For example, in the experiment that Jean-Claude presented, we saw something that we really didn't expect. With two layers, for example, in a simple problem, you have a kind of Fourier analysis; it's very spectacular: you have an approximation by harmonics of the shape of the function, because we were looking for formulas. And at some moment, you have a bifurcation, and you go to a new attractor, which makes the logic. So in some sense, this says that this structure of bifurcations has to be understood; and here is this representation of what I said about the dynamic, which is a dynamic that is run by sections, which are the first level of cohomology; and here, sigma denotes these bifurcation sets, which have to be understood. Of course, at this moment, there are very few analytical results; for example, why you shift from harmonics to logic is certainly very hard from the analytical point of view. So the second goal was to try to explain why the DNNs do better than an approximation of functions by interpolation; it was a subject of Jean-Claude's talk too, because in many interesting cases, you have restrictions on the structure, on the functioning and on the learning.
And it comes from geometry, or from semantics, which is extracted or expected from the data and from the problems that the network has to solve as output. And this internal structure that Jean-Claude discussed is given now not by a presheaf in this sense but, via the Grothendieck construction of the fibration, by a contravariant functor to some category, for example the category of groupoids; and this gives a stack. And the hypothesis is that the dynamical objects, and all the objects that we presented, belong to this stack, becoming objects of the topos of the stack. And we go in this sense to the classifying topos. And we join here the notion, which is now studied by Olivia, of the relative topos; this is another name for this situation. But she looks very carefully now at its logical interpretation in terms of, for example, first-order or second-order logic. And we started with that. So before, perhaps, I show you, because we mentioned it several times, this kind of Fourier analysis made by the CNN. Here you have an idea of the architectures which are used today, and you see they really no longer have the scheme of a fully connected network. You have several parallel views which come together at some point. And you see, in the first layers, where after that it is very difficult to understand what happens, but in the first layers, you see this very nice wavelet-like kind of analysis, or opponent analysis of color, which probably constructs a three-dimensional space of basic colors. And here I present this, because it will be another reason to look at this invariant structure: the way translation is used; here you see the LSTM chain on the right and at the bottom. That is what is original. And this is the main theme I will come to at the end: this recent use of networks, for example starting in 2015 and now continuing and exploding, is to use both language and image analysis, with a CNN, to answer questions about scenes.
It's not only how to detect the cat, which could be done by a simple CNN, but to combine many things and to try to have a description. Now people are able to have a dynamical description in time of what happens in movies, not only in images. So this is the task of the expression of these internal structures, which are very far from the fully connected networks which are used to prove the approximation theorems. And so this is what I say here: the DNNs that analyze images today, for instance in object detection, have several channels of convolution maps, max pooling, fully connected maps, that are joined together to take a decision. It looks like this serves for localization and translation invariance, as happens in the successive visual areas in the brain of animals; you have, in some sense, many copies of translations of the image. This was discovered by neuroscientists, Hubel and Wiesel, a long time ago, in the fifties, and experiments show that the receptive fields are transported by translation from place to place. But after that, you develop, certainly, more and more invariance, using not only the architecture but also experience; other algorithms were used. And now I come to this presentation of the stack. It is given by a contravariant functor from the poset C of the DNN to the category of categories, for example, and the fibration is reconstructed by the Grothendieck construction, which is described by this formula, telling how you construct the morphisms in the category F given by the contravariant functor. And now, and this is of interest also to Olivia and Jean-Claude: the presheaves over F, the elements of the classifying topos, are also described by local presheaves in each fiber, which are related by natural transformations. And among these natural transformations, you have a very important one, which comes from the adjoints of the F star.
And this F alpha is the map from the category of one layer to the category associated to the layer just before, from the dynamical point of view. And the composition with F alpha gives you a functor from the topos of one fiber to the topos of the other fiber. And this functor has two adjoints, one left adjoint and one right adjoint. And from the left adjoint, with the explanation in this case, you have a map which goes from the classifying object in the fiber before to the one in the next fiber, closer to the output, coming back. And this defines an object, an object of the classifying topos, with some kind of integration, which is written as a derivation by the local logic in each fiber. That's a very important point to explain. This is not sufficient in general to transport the theories. Because, in a sense, what matters is the semantic function that we expect; here, in a sense, we are describing the structure which we expect the dynamics can learn, but it is some kind of constraint we impose. For example, in a CNN, we impose that the maps from states to other states are given by convolutional matrices, with non-linearities; and it is in a sense a hope that the real network satisfies it. But this hope cannot be satisfied if, at the theoretical level, you cannot transfer the question from the output to the inside of the network, and the theories built by the network working in the layers back to the output. That is what the theorem helps to answer. So in this case, in some sense, the two things which are important, probably not necessary but sufficient, are that the two morphisms, the pullback and its right adjoint, which is more difficult to describe because it already involves some kind of cohomology, are both geometric and open. And for that, we could follow Olivia, or Mac Lane and Moerdijk, to extract the necessary and sufficient conditions to have a good transfer of theories.
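The Grothendieck construction invoked in this passage can be made concrete in the simplest case, where the fibers are plain sets and the restriction maps are functions (a presheaf, i.e. a stack of discrete categories). This is my own illustrative sketch, with made-up names; the total category has objects (c, x) with x in the fiber over c, and a morphism (a, x) -> (b, y) for each arrow a -> b whose restriction sends y to x:

```python
def grothendieck(arrows, fibers, restriction):
    """Total category of the Grothendieck construction, discrete case.

    arrows: list of pairs (a, b), one per arrow a -> b of the base poset
            (identities included as (a, a));
    fibers: dict mapping each base object c to its fiber, a set;
    restriction: dict mapping each arrow (a, b) to a function
            F(b) -> F(a)  (the functor is contravariant).
    """
    objects = [(c, x) for c, xs in fibers.items() for x in xs]
    # One morphism (a, F(alpha)(y)) -> (b, y) for each arrow alpha and
    # each element y of the fiber over the target b.
    morphisms = [((a, restriction[(a, b)](y)), (b, y))
                 for (a, b) in arrows
                 for y in fibers[b]]
    return objects, morphisms
```

For a two-layer base 0 -> 1 with fibers {p, q} over 0 and {r} over 1, and the restriction sending r to p, one gets three objects and, besides the three identities, a single non-identity morphism (0, p) -> (1, r), which is exactly the fibered-category picture of the stack.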
So following Giraud, in this paper, we see that in this case, in some sense, the fibers F alpha are themselves fibered. So in some sense, the good stacks are not only fibered over the site C (we forget the topology, because we work with presheaves); also, internally, you must have some fibration. And what this says is something that people state in statistical terms as the information bottleneck hypothesis: it is, in some sense, that you progressively lose information from the input data, to come closer and closer to the simple question which is asked at the output. So in this sense, this map says that you are performing a kind of operation of forgetting the data, to construct something more significant. And the lemma which expresses that says that, when you have this internal fibration, in the same sense, you have an adjunction at the level of logic, which is given by the left adjoint that we looked at before, which is really a left adjoint of what comes from the right adjoint of the functor alpha. You can think really geometrically of that, as a kind of hypothesis that this fibration is itself made, layer by layer, of fibrations. Now we come to language. So you have several languages. It's true, as Jean-Claude said at some moment, that we are now using languages which are closer to linear logic than to traditional logic, and this is developed in our text. But now we could also just restrict to the simplest case, where the language is given by the objects in the topos. And we are not looking exactly at the subobject classifier itself, but at the classifier of subobjects of a given object. Practically, this is sufficient. In all the applications we know, for example, we have bars in the image, and they have colors, and they have lengths, and it's easy to construct an object of the topos, coming from the stack, which expresses all of that very well. And what is our semantic function? So I connect now only to the dynamic.
It is the fact that the activities of the layers express axioms of theories, which are interesting to conclude at the end. So T out of C in is the name of the theory which is expected when you have the input C in. And so now, this map: people external to the network are able, though it would take a lot of time, which is why in practice they look only at samples, but theoretically people today are able to understand this correspondence between input and theory. But the network system has to construct the deduction. In fact, the deductions are just given by the activity of some neurons of the layers; that is what the D stands for, discretized. Not all the neurons are discretized, but here we look at the discretized ones. And these discretized neurons give parts of some objects, which are seen as different axioms for theories, and these axioms must be transported to the good theory. This is a good semantic function; you could have bad semantic functions. This is how we relate the dynamic to the theory. And now we can vary the stack over C, because a given architecture, depending on the kind of problem, could use different categories to construct the stack. And this was looked at by Giraud: the 2-category of stacks over C, where M is the category of categories. But here, as you will see, it will be very interesting to generalize that, because the category of categories is what is named a closed model category, and we can take M more general. In fact, the most interesting topos for us at this moment comes from the category of groupoids. But you can also take, perhaps one word about it, the category made by the fibrations in groupoids over a poset, which is a closed model category too, in the sense of Quillen, so with good properties. But what are these model categories? Because they will play a role now, for the rest of the talk.
So not everybody, I suppose, knows what these are. Ah, OK. They were invented by Quillen, and they were a large source of inspiration for Grothendieck in his theory of stacks and his theory of derivators, which recast the notion of closed model category in other terms. This was taken up after Grothendieck by Cisinski and Maltsiniotis, who pursued Grothendieck's questions about these objects, and Maltsiniotis gives the constructions I need — we need them here. A closed model category is a category with special morphisms, which look like fibrations, cofibrations, and weak equivalences, in Quillen's sense. And the goal is to do homotopy theory in the framework of categories, in particular to define the homotopy category, that is, morphisms up to homotopy between certain special objects — we need not all the objects. The motivation of Quillen was to extend constructions coming from topology to algebra, like what happened at the beginning, when Hopf, for example, understood that the homology introduced before for topology was very useful also in group theory; after that, it was done for algebras. And all of that concerned homology. The next step was homotopy, and it is arriving now — it was already arriving a bit before. So what the first paragraph here says is that when I have such a category with homotopy, if I take the category of contravariant functors from C to M, then I get a new model category. And very interestingly, this generates a very rich type theory. For example, if you use for M the category of simplicial sets, it generates what is named homotopy type theory, or Voevodsky's univalent foundations. So this kind of type theory constitutes an extension of the ordinary theories, including many kinds of set theory — in some sense a kind of revision of the basis of mathematics.
And Arndt and Kapulkin have shown that precisely these categories of functors with values in a model category — for example, the category of groupoids — define such a general type theory, the intensional type theory introduced by Martin-Löf, and it is a wide extension of the kind of languages we used before. And this remains true when you now vary the stack. So the main technical reason for what we are doing, which is modeling neural networks, is the fact that in this special case of DNNs — it is not true for presheaves in general — you can determine easily what the fibrations are, and in particular what the fibrant objects are. A fibrant object is an object such that the map to the point, the final category, which always exists in this setting, is a fibration. These fibrant objects are describable, and we were very struck by this, because they correspond exactly to the conditions needed so that theories are propagated in the logic of the languages of all this stuff. Perhaps it is too technical to describe here, but we recover exactly this condition: that in fact we have not only a fibration, but a fibration made of fibrations. And it includes the condition in the formula, where you see a projection from a product; this projection from a product is exactly what you need to have a fibrant object. Why is it important to look at these fibrant objects? Because in this case every object is also cofibrant — that is, the map from the initial category to it is a cofibration. And this is important because the homotopy category has exactly these objects, the fibrant and cofibrant ones, as its objects. So in this instance we connect homotopy with something which is interesting from the semantic point of view. So this will be the next step: to use information, we try to define semantic information. So I skip the discussion of these manifolds, and we look now at the languages and theories, how they propagate, and we consider the types of theories.
And in some sense we can localize further to get this manifold. And we can look at the subsets of theories which exclude some propositions — a theory excluding some propositions. We observe experimentally that these play a special role in the logical activity of the cells, and this gives a kind of localization. And now, from this language, we construct a fibration of fibrations, and we can extract a more precise category here, so that the theories behave well over this language. Now, there is an important formula here, in the definition: we suppose that we look at the propositions which are implied, in the naive sense, by the propositions of this elementary logic. Everything is local: you are in a given layer, in a certain context, and you look at the theories which can be expressed in this language. And you have this interesting operation, which is given by internal implication. The idea is that this internal implication will play the role that conditioning plays in the Bayesian setting. So perhaps I can recall the background on semantic information. This is the work on toposes that Laurent mentioned, done with Pierre Baudot: we have shown that the Shannon entropy is a universal cohomology class for a special module over a ringed topos. It is a function of the probabilities, and this topos is made from the set of random variables with the forgetting maps — the morphisms of the category in this setting are just forgettings. In some sense there are two analogies with the network. First, from layer to layer there is a kind of forgetting. But more than that, when you look at a proposition, in the internal semantics, you can say that this proposition is itself a variable which could have values — for example, if you introduce an additional function measuring the value of the proposition. And here you see that you have this set of random variables; the ring structure is given by the joining of variables.
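The cocycle property of entropy alluded to here is, concretely, the Shannon chain rule H(X,Y) = H(X) + Σ_x p(x) H(Y | X=x), where conditioning plays the role of the module action. A small numerical check — the joint distribution below is made up purely for illustration:

```python
from math import log2

def H(dist):
    """Shannon entropy of a probability dict {outcome: prob}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# A concrete joint distribution p(x, y).
joint = {('a', 0): 0.25, ('a', 1): 0.25, ('b', 0): 0.5}

# Marginal of X and conditionals Y | X = x.
px = {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p
cond = {x: {y: p / px[x] for (x2, y), p in joint.items() if x2 == x}
        for x in px}

# The 1-cocycle (chain rule) identity: H(X,Y) = H(X) + sum_x p(x) H(Y|X=x).
lhs = H(joint)
rhs = H(px) + sum(px[x] * H(cond[x]) for x in px)
assert abs(lhs - rhs) < 1e-12
```

In the homological picture, it is exactly this identity that makes entropy a 1-cocycle for the conditioning action.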
In our case it is the analog: the tensor product of the module in that category, and the action is given by conditioning — conditioning means action. And when you look at that, from this topological object you extract information quantities as topological invariants of the object. So it was natural in this context — it is very top-down now — to decide that information must be an invariant of what we just described. Because we have described the categories, there is a propagation of languages, and over them you have the theories. The analogy is that theories are the analog of probabilities, and the conditioning, which is given by internal implication, is the analog of probabilistic conditioning. In fact, in another paper, which we put on the arXiv, we make this analogy precise. And now we compute the invariants. In degree zero all is OK: in some sense we obtain the theoretical analog, for semantics, of the entropy. But what is disappointing is that there is no higher homology in this case, and this is mainly due to the fact that you always have a kind of equivalence of theories here; by using this equivalence, when you localize, it is very easy to kill all the possible cocycles. In fact, this is something which happens also in quantum theory: you can apply the same principles to quantum information and compute the homology as before. And you see that the entropy, which is also present in the quantum case — because the quantum case always contains the ordinary probability laws — appears again, and the fundamental quantity is, in a sense, the analog of this function, which makes the entropy degenerate there. But here it works, because in fact we do not have just one candidate: you have a very large ambiguity about what this fundamental quantity could be. So this function is now the entire entropy.
So we will try to make it more precise by using something like — as was mentioned at the end — a kind of Galois theory, given by the structure of the fibers under a group action, so that this function, which has some functorial property, is better characterized. In some sense, when you take numerical functions in the Boolean case, we just recover the ambiguity that was already described by Carnap and Bar-Hillel in 1952, where they showed that, in some sense, the measure defining information is arbitrary, up to some requirements of independence or symmetry. So perhaps things will be better when you go to homotopy. And here we can describe an object which replaces the bar complex homotopically, built from the set of theories and two very interesting kinds of gluing of theories — gluings which are, in some sense, adjoint. You see, here I consider what are named homogeneous cochains in the case of the bar complex, which can be described without anything abelian: it is only the fact that if you apply the action, which is given by conditioning on the theories, it must be the same as multiplying by the proposition you want to take as attribute. So you have two kinds of elements: the propositional calculus, essentially coming from the output, which makes the gamma; and the theories which propagate in the network to answer the question of the output. But you have another one which, in some sense, tries to translate the dynamics — the kind of propagation of the theories along the theoretical network. And you can define not only equivalence of these, but homotopical equivalence: if you take cylinders, you make it a homotopical question, which is not an ordinary question. And you get something like a natural space — in fact it is a bisimplicial set — which looks like a good candidate to represent the theories homotopically.
And now for the semantic function you can do the same, not for the propositions but for the propagation in the network itself, with the dynamical objects. The semantic function then becomes a conjectural simplicial map — it can be very poor; the objective is to make this map rich. OK. So it defines, in some sense, a kind of homotopy type of the description, by the network, of the problem which is to be solved. And I look at the arrows — I must stop very soon. At this level we could look at maps from theories to spaces of theories, for example in a model category. And it is easy to reproduce, in a totally non-abelian framework, the main relations of information theory — for example this relation, the Shannon equation for H(P): it now means that when you subtract the entropies of some theories, you get the conditioning. One example we worked out was L23, the theory that Jean-Claude presented — perhaps I skip that, because he already spoke about this kind of theory. Just to get to the end: now we will vary the site. So far we have varied the stack and the language, but now we want also to vary the network, the architecture. And when we do that, we arrive exactly at derivators, especially when we enlarge the groupoid category to any closed model category. So we come to this 2-functor from a 2-category of small categories to a larger 2-category, and the principal example is this one: D_M of a small category Y is the category of all presheaves on Y with values in M, up to weak equivalence — exactly the one I encountered with Giraud for relative toposes. And this generalizes what was done before with derived categories in the abelian case.
And the point, importantly — with the theorems of Maltsiniotis, Cisinski and so on — is that this constitutes not only a generalization of toposes (you see, it is exactly the formula for a topos of presheaves), but a generalization of all of homological algebra. What is important is really to consider the functoriality in the small category; and as we have shown, the information spaces belong to such a structure. And by considering only fibrant objects in this construction, we work in the homotopy category of this derivator. So, to finish, I come back to concrete structures, because there are some kinds of modules in networks where these very structures are happening. This one, for example, is named the attention module, or more generally a transformer. You see, it is given by this very interesting formula: you take a nonlinear map of a quadratic operation. The x and y are the input vectors, and W_Q, W_K are the matrices of queries and keys. You get something which is fundamentally of degree three, but which also has this specific form, which is very important: first apply a nonlinearity to the quadratic part. And all the cells then support the semantic functioning we encode. What we explain in our paper is that, in some sense, this degree three is recovered at the level of neurons, and you can understand these functions as subnetworks where the internal structure of the fibers involves a group acting on the underlying data. And it is the same for the LSTM cells, or the approximations of them which are named GRU: they have exactly the same form. You see the formulas: c_t is only quadratic, while h_t is of degree three, and all the architectures which work have this form. And these are the categories that Jean-Claude showed us.
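As an illustration of this "nonlinearity applied to a quadratic part" shape, here is a minimal pure-Python sketch of single-head dot-product attention. The matrices WQ, WK, WV (queries, keys, values) and the toy inputs are invented for the example; this is a sketch of the standard attention formula, not of the paper's categorical analysis of it.

```python
from math import exp, sqrt

def softmax(v):
    m = max(v)                      # subtract max for numerical stability
    e = [exp(t - m) for t in v]
    s = sum(e)
    return [t / s for t in e]

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def attention(X, WQ, WK, WV):
    """Single-head dot-product attention over a list of input vectors X.
    The score q_i . k_j is quadratic in the inputs, and the output
    sum_j softmax(scores)_j * v_j multiplies in one more copy of the
    input through v_j: overall degree three, with the nonlinearity
    (softmax) applied to the quadratic part."""
    d = len(WK)
    Q = [matvec(WQ, x) for x in X]
    K = [matvec(WK, x) for x in X]
    V = [matvec(WV, x) for x in X]
    out = []
    for q in Q:
        scores = [sum(a * b for a, b in zip(q, k)) / sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wj * vj[i] for wj, vj in zip(w, V))
                    for i in range(len(V[0]))])
    return out

# Toy check: identity weight matrices, two basis-vector inputs.
I2 = [[1.0, 0.0], [0.0, 1.0]]
out = attention([[1.0, 0.0], [0.0, 1.0]], I2, I2, I2)
assert abs(sum(out[0]) - 1.0) < 1e-12   # softmax weights sum to 1
assert out[0][0] > out[0][1]            # each vector attends most to itself
```

The same degree count applies to the LSTM equations mentioned above: the cell update c_t is quadratic in its inputs (gate times state), and h_t multiplies in one more gate, giving degree three.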
So this is in some sense a beginning for understanding the structure of these modules, and the way they contribute to the global function. And there is a result which says that such a map is not structurally stable as a whole, but on each coordinate it is: in some sense the weights contribute sufficiently many parameters to make the neurons individually stable. And this connects with two things. This stability connects with the theory of singularities in this context — the unfolding theory — because it is itself connected with the study of stability, which leads toward linguistics. But what is more impressive is that, independently, linguists developed such kinds of structures to understand the manipulation of notions in a language; I give the two references here, in particular the books of Culioli. And of course we are not working with true language. As Gromov and others have said recently, there is no mathematical theory of language. In fact, even more, it has been said — in the tradition of Wittgenstein and Austin — that we cannot fully formalize language, because language cannot be embedded in language; it is endless, the possibilities of language are endless. But we encounter them, and we can do the mathematical work for some kinds of artificial languages, which are not ordinary languages. So sorry, I stop here. OK, so thank you so much, Daniel.
Every known artificial deep neural network (DNN) corresponds to an object in a canonical Grothendieck topos; its learning dynamic corresponds to a flow of morphisms in this topos. Invariance structures in the layers (like CNNs or LSTMs) correspond to Giraud's stacks. This invariance is supposed to be responsible for the generalization property, that is, extrapolation from learning data under constraints. The fibers represent pre-semantic categories (Culioli, Thom), over which artificial languages are defined, with internal logics: intuitionistic, classical or linear (Girard). Semantic functioning of a network is its ability to express theories in such a language for answering questions in output about input data. Quantities and spaces of semantic information are defined by analogy with the homological interpretation of Shannon's entropy (P. Baudot and D.B.). They generalize the measures found by Carnap and Bar-Hillel (1952). Amazingly, the above semantical structures are classified by geometric fibrant objects in a closed model category of Quillen; they then give rise to homotopical invariants of DNNs and of their semantic functioning. Intensional type theories (Martin-Löf) organize these objects and fibrations between them. Information contents and exchanges are analyzed by Grothendieck's derivators. Joint work with Jean-Claude Belfiore.
10.5446/55529 (DOI)
I am happy to introduce Jason Parker from Brandon University, who is going to talk on covariant isotropy of Grothendieck toposes. So please go ahead. Great. So thank you for the opportunity. I'm Jason Parker, and just FYI, I do sometimes stutter or stammer — so if you hear me stuttering or stammering, that's totally normal. So yeah, I'm talking about covariant isotropy of Grothendieck toposes. So just to start with some background. Whoops, I have to get that going. Okay. Yeah, so basically, covariant isotropy is a somewhat recent categorical construction that gives an abstract notion of conjugation, or inner automorphism, for an arbitrary category. So in prior work with Pieter Hofstra and Phil Scott, we used techniques from categorical logic to characterize the covariant isotropy group of any locally presentable category C, and in particular of any presheaf category. So in this talk, we'll first give an overview of covariant isotropy and then show that its characterization for any presheaf category basically extends to any Grothendieck topos. And this is based on my recent preprint, with the same title as this talk, that's currently on the arXiv. Okay, so just to first motivate this topic. So George Bergman proved, in a paper from I think 2012, that the inner automorphisms of groups can be characterized purely categorically as the group automorphisms that extend naturally along any group homomorphism out of the domain. To see this, first observe that if I have, say, an inner automorphism alpha of some group G induced by some element s in G, then for any group homomorphism f from G to some further group H, we can push forward alpha along f to define a further inner automorphism alpha_f of the codomain H, simply by conjugation with f(s) in H. So in particular, if I push alpha forward along the identity on G, we just get alpha back.
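Bergman's pushforward can be checked concretely: for permutation groups, conjugating by f(s) in the codomain agrees with transporting conjugation by s along f. A small Python sketch — the choice of S3 and of the particular elements is invented just for illustration:

```python
import itertools

def compose(p, q):
    """Permutation composition p∘q (p after q), permutations as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def inner(s):
    """Inner automorphism g -> s g s^{-1}."""
    return lambda g: compose(compose(s, g), inverse(s))

# G = S3 as permutations of {0, 1, 2}.
S3 = list(itertools.permutations(range(3)))

s = (1, 0, 2)       # a transposition, inducing alpha = inner(s)
t = (1, 2, 0)       # a 3-cycle; f = inner(t) is a homomorphism G -> G
alpha = inner(s)
f = inner(t)

# Pushing alpha forward along f is conjugation by f(s),
# and the naturality square commutes: alpha_f ∘ f = f ∘ alpha.
alpha_f = inner(f(s))
for g in S3:
    assert alpha_f(f(g)) == f(alpha(g))
```

The identity being checked is just t s g s⁻¹ t⁻¹ = (t s t⁻¹)(t g t⁻¹)(t s t⁻¹)⁻¹, which holds in any group, so the same check works with any homomorphism f in place of inner(t).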
Now, this family of automorphisms that I obtain from the starting inner automorphism alpha is coherent, in the sense that if I have group homomorphisms f from G to G' and f' from G' to G'', then the following square will always commute: we have alpha_f on the top, alpha_{f'∘f} on the bottom, and f' on either side. So if I push the inner automorphism alpha forward along f, and then along f'∘f, those pushed-forward inner automorphisms will be coherent in this sense. Okay, so for a group G, we call an arbitrary family of automorphisms with the above naturality property an extended inner automorphism of G. Concretely, this is a family of group automorphisms, one for each group homomorphism out of G, on the codomain of that homomorphism, all coherent with each other. Now, I don't assume a priori that these are induced by an inner automorphism of G in the usual sense: this is just an arbitrary family of automorphisms on the codomains of all the group homomorphisms out of G that are coherent in the sense of the previous slide. So this is just a natural automorphism of the projection functor from the coslice category G/Group to Group. So Bergman proved that if I have any automorphism alpha of a group G, then alpha is an inner automorphism — that is, induced by conjugation with some element — if and only if there is an extended inner automorphism, in the above sense, that induces alpha, i.e. whose component at the identity on G is just alpha itself. We just saw that if alpha is inner, then it induces such an extended inner automorphism. But the converse is also true: if I give you any arbitrary extended inner automorphism of a group G, then it must come from an inner automorphism, in the conjugation-theoretic sense, of the starting group G. So this gives a completely categorical characterization of the inner automorphisms of groups.
They're exactly those group automorphisms that can be coherently, or functorially, extended along group homomorphisms out of the domain. Okay, so how does this relate to, or inspire, covariant isotropy? So we have a functor Z from Group to Group that sends any group G to its group of extended inner automorphisms — and these do form a group. We refer to Z as the covariant isotropy group functor of the category Group, and Bergman's theorem shows that it is in fact isomorphic to the identity functor on the category Group. So in fact, any category C has its own covariant isotropy group functor Z_C that sends each object C to its own group of extended inner automorphisms. These are families of automorphisms, one automorphism on the codomain of each arrow out of the object, that are coherent with each other in the sense described on a prior slide — that is, natural automorphisms of the projection functor from the coslice category under the object to the category. Okay. So we can turn Bergman's characterization of inner automorphisms in Group into a definition of the notion of inner automorphism in an arbitrary category C. If we have an object C and an automorphism alpha of C, then we define alpha to be inner just if it's induced by an extended inner automorphism — that is, if there is an extended inner automorphism whose component at the identity of the object is alpha itself. So basically, alpha is inner just if it can be coherently extended along any arrow out of the object. Now, Group is the category of models of an algebraic or equational theory, in the sense of categorical logic.
So in prior work, we basically generalized ideas from the proof of this result of Bergman to give a logical characterization of the inner and extended inner automorphisms, in this more categorical sense, for the category of models of any finitary quasi-equational theory. So I'll just give some background on those quickly. A finitary quasi-equational theory T, in the sense of the paper by Palmgren and Vickers called, I think, "Partial Horn logic and cartesian categories", is given over a multi-sorted finitary equational signature — so it has just sorts and operation symbols — together with a set of implications, called the axioms, between finitary Horn formulas, which are just conjunctions of equations between terms. Now, the relevant feature of these theories — also known as cartesian theories or essentially algebraic theories — is that the operation symbols may only be partially defined. For a term t, we write t↓ as an abbreviation for t = t, meaning that t is defined; this is not always a theorem of the underlying logic. Now, given any regular cardinal lambda, one can extend these theories to lambda: a lambda-ary quasi-equational theory T is defined in the same way, but now allowing for operations of possibly lambda arity, and also lambda-ary conjunctions. Okay, so just to give some examples: any algebraic theory is a finitary quasi-equational theory, as are the theories of categories, groupoids, and strict monoidal categories, and any presheaf category is a category of models of one. Now, for this talk: if we have a small site, then the Grothendieck topos on that site is in fact the category of models of a lambda-ary quasi-equational theory, where lambda is the smallest regular cardinal larger than the cardinality of any covering sieve.
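The partiality of operations in a quasi-equational theory can be illustrated with the theory of categories mentioned above, where the composite g∘f is defined only when cod(f) = dom(g). A toy encoding — the class name and the "walking arrow" example are invented for illustration:

```python
class FiniteCategory:
    """A finite category given by objects, arrows, dom/cod maps, and a
    partial composition table: the composite g∘f is defined only when
    cod(f) == dom(g), matching the partial operation symbols of the
    quasi-equational presentation of the theory of categories."""
    def __init__(self, objects, arrows, dom, cod, comp, ident):
        self.objects, self.arrows = objects, arrows
        self.dom, self.cod, self.comp, self.ident = dom, cod, comp, ident

    def compose(self, g, f):
        if self.cod[f] != self.dom[g]:
            raise ValueError("composite undefined")   # partiality of ∘
        return self.comp[(g, f)]

# The "walking arrow" category: two objects A, B and one arrow f: A -> B.
C = FiniteCategory(
    objects={'A', 'B'},
    arrows={'1A', '1B', 'f'},
    dom={'1A': 'A', '1B': 'B', 'f': 'A'},
    cod={'1A': 'A', '1B': 'B', 'f': 'B'},
    comp={('1A', '1A'): '1A', ('1B', '1B'): '1B',
          ('f', '1A'): 'f', ('1B', 'f'): 'f'},
    ident={'A': '1A', 'B': '1B'},
)
assert C.compose('1B', 'f') == 'f'   # defined: cod(f) == dom(1B)
```

In the logic, the guard in `compose` corresponds to the definedness assertion (g∘f)↓ holding only under the hypothesis cod(f) = dom(g).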
So the sorts are the objects of C; for any arrow f, I have a unary operation symbol going in the opposite direction; and for any covering sieve S in J, we have a lambda-ary operation symbol sigma_S of the appropriate typing. And then you have axioms expressing the contravariant functoriality, and the fact that any matching family has a unique amalgamation. So the models of this theory are exactly the sheaves on the site. Okay, so now I'll briefly give an overview of the main results on covariant isotropy of cartesian, or quasi-equational, theories. So fix a lambda-ary quasi-equational theory T over a lambda-ary signature, with its category of models T-Mod. I'll review the characterization of its covariant isotropy group, which was achieved for finitary theories in prior work, and extended to the lambda-ary case in my recent preprint. Basically, using the quasi-equational syntax of T, one can define a notion of definable automorphism for a model M of T, and these then form a group. So suppose T is single-sorted, for simplicity. Given any model M, one can form the T-model M[x], obtained from M by freely adjoining an indeterminate element x. The elements of this model can be seen as congruence classes of terms involving x, regarded as a new constant, together with constants from the model M. Here, two terms are congruent if they are provably equal in the diagram theory of the model, extended by the axiom saying that this new constant is defined. The diagram theory of M extends T by adding constants from the model M, and axioms expressing the relations that hold in M. Okay, so given an element [t] in M[x], we say it's substitutionally invertible if there is some, possibly different, element [s] in this model that is the substitutional inverse of t with respect to this diagram theory plus x↓.
So if we substitute s for x in t, we get just x in the diagram theory, and conversely. If we have, say, an operation symbol f, then a given element [t] in M[x] commutes generically with f if a certain theory proves a certain sequent. The theory here is the diagram theory extended by n new pairwise distinct constants x_1 through x_n, which are all defined. And the sequent expresses that if f applied to this n-tuple of new constants is defined, then substituting it for x in t is equal to f applied to the n corresponding instances of t. So basically this expresses that t commutes, in a generic sense, with the operation symbol f. Likewise, the congruence class [t] reflects the definedness of f if this same theory proves the sequent which says that if t, with f of this tuple substituted for x, is defined, then so is f of this tuple. This again expresses that [t] reflects the definedness of f in a generic sense. Okay, so we can then define the group of definable automorphisms of M to be the group of all elements [t] in M[x] that are substitutionally invertible and commute generically with, and reflect the definedness of, every operation symbol of the signature sigma. Basically, we regard such an element [t] as inducing a definable automorphism of M if it satisfies these properties. And if T is multi-sorted, one can extend the above definitions appropriately — you just have to account for the possibly many sorts, which is not too hard to do, and is shown in the references.
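To make these conditions concrete, here is the standard example in the theory of groups (a sketch using only the group axioms): for a fixed element $s$ of a model $M$, the congruence class of the term $t(x) = s\,x\,s^{-1}$ in $M[x]$ is substitutionally invertible and commutes generically with multiplication.

```latex
t(x) = s\,x\,s^{-1}, \qquad u(x) = s^{-1}\,x\,s, \qquad
t(u(x)) = u(t(x)) = x;
\qquad
t(x_1 \cdot x_2) = s\,x_1 x_2\,s^{-1}
  = (s\,x_1\,s^{-1})(s\,x_2\,s^{-1})
  = t(x_1) \cdot t(x_2).
```

Since the operations of the theory of groups are total, the definedness-reflection conditions are automatic here, and $[t]$ induces exactly the inner automorphism of $M$ given by conjugation with $s$.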
Okay, so we proved in prior work that if we have a lambda-ary quasi-equational theory, then for any model M of this theory, its covariant isotropy group — its group of extended inner automorphisms — is isomorphic to its group of definable automorphisms. And in prior work, namely in the initial paper "Isotropy of Algebraic Theories", we used this result to show that the inner automorphisms, categorically defined, in many categories of algebraic structures — monoids, groups, abelian groups, noncommutative rings, etc. — are exactly the conjugation-theoretic inner automorphisms that you would expect. So in this sense, covariant isotropy does provide a generalized notion of inner automorphism for a category. Let me check the time. Okay, so for presheaf categories: in a recent paper, we also characterized the covariant isotropy group of any presheaf category over a small category C. Given a presheaf F, we show that the group of definable automorphisms of F consists, up to isomorphism, of exactly the natural automorphisms alpha of F induced by some element psi of the automorphism group of the identity functor on C. Here psi is a natural automorphism of the identity functor on C, and to say that alpha is induced by psi means that the component of alpha at any object C is just the functor F applied to the component of psi at C. So this shows that the only inner automorphisms of presheaves are those induced by natural automorphisms of the identity functor on the base category. It then follows that the covariant isotropy group functor of any presheaf category is constant on the automorphism group of the identity functor of the index category. Okay, so in my recent preprint, I wanted to see if this would extend to arbitrary Grothendieck toposes, that is, sheaf categories.
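This characterization can be sanity-checked in the simplest case: for the one-object category B_G corresponding to a group G, natural automorphisms of the identity functor are exactly the central elements, so the covariant isotropy of presheaves on B_G (right G-sets) is constant on the center Z(G). A small computation — the permutation encoding is invented for illustration:

```python
import itertools

def compose(p, q):
    """Permutation composition p∘q, permutations as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def center(G):
    """Center of a finite permutation group G (elements as tuples)."""
    return [z for z in G if all(compose(z, g) == compose(g, z) for g in G)]

S3 = list(itertools.permutations(range(3)))
# For the one-object category B_G, Aut(id_{B_G}) is the center Z(G),
# so the covariant isotropy of presheaves on B_G is constant on Z(G).
assert center(S3) == [(0, 1, 2)]          # Z(S3) is trivial

# A cyclic subgroup C3 of S3 is abelian, hence equal to its own center.
C3 = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
assert sorted(center(C3)) == sorted(C3)
```

So for G = S3 the covariant isotropy of right S3-sets is trivial, while for an abelian G it is all of G, matching the Aut(id_C) description in the talk.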
So to start, for technical convenience, I first examined subcanonical sites in which no object is covered by the empty sieve — so, for every object C, the empty sieve is not a covering sieve of C. Okay, so if F is any sheaf over such a site, I show that its group of definable automorphisms consists, up to isomorphism, of exactly those natural automorphisms alpha of F induced by some natural automorphism psi of the identity functor on C, in the sense described on the prior slide. Now, this proof is quite technical, and it is the most non-trivial aspect of the overall results in that preprint. So although this result is, so far at least, the same as for presheaves, its proof is still non-obvious and non-trivial. Okay. Yeah, so as for presheaf categories: if (C, J) is any small subcanonical site in which no object is covered by the empty sieve, then the covariant isotropy group of that sheaf category is constant on the automorphism group of the identity functor on the base category. So now we want to remove the assumptions of subcanonicity and of no object being covered by the empty sieve. The second property is easier to dispense with: given any small subcanonical site which doesn't necessarily have this property, one can fairly easily find another subcanonical site which does have it — in which no object is covered by the empty sieve — such that the sheaf categories are equivalent and the automorphism groups of the respective identity functors are isomorphic. Therefore, if (C, J) is any small subcanonical site, where objects may or may not be covered by the empty sieve, the covariant isotropy group of the sheaf category is still constant on the automorphism group of the identity functor on the base category. So now we want to consider any small site, not necessarily subcanonical.
So first, if we have any locally small category E with a small full dense subcategory C, then I proved a lemma saying that the automorphism groups of the identity functors on those two categories are isomorphic. And I use that to prove the following result. If we have any small site (C, J), not necessarily subcanonical, then it's well known that there is a subcanonical topology K on the small full dense subcategory a(y(C)) — the full subcategory on the sheafifications of the representables — such that the original topos is equivalent to the topos of sheaves on this new site. So the covariant isotropy group of the original sheaf topos is constant on the automorphism group of the identity functor on this full subcategory a(y(C)), and by the lemma in the first point, that is in turn isomorphic to the automorphism group of the identity functor on the topos itself. So basically, for any small site — not necessarily subcanonical, and where objects may be covered by the empty sieve — the covariant isotropy group of the topos is constant on the automorphism group of the identity functor of the topos, which may also be called the center of the topos. Okay, so in particular, since any presheaf category is the category of sheaves for the trivial, subcanonical topology on C, where only the maximal sieves cover, we recover our earlier result for presheaf toposes. And note that if the site is not subcanonical, there is in general no relation between the automorphism groups of id_C and of the identity functor of the topos.
So for example, if the automorphism group of id_C is non-trivial — say, for example, if C is the one-object category corresponding to a non-trivial abelian group — and J is the topology where every sieve covers, then it's easy to show that J is not subcanonical, and that the only sheaf is the terminal presheaf, so the sheaf category is trivial. So the automorphism group of the identity functor of the topos is trivial, even though the automorphism group of id_C is non-trivial by assumption. And yeah, so our results show a major difference between covariant and contravariant isotropy of Grothendieck toposes. The latter was studied in the paper called "Isotropy and crossed toposes" by Funk, Hofstra, and Steinberg, and they show that contravariant isotropy is always representable by a sheaf of groups, while we have now shown that covariant isotropy is always constant — in fact, on the group of global sections of this sheaf of groups. Okay, so to start wrapping up: via George Bergman's purely categorical characterization of the inner automorphisms of groups, covariant isotropy can be seen as giving a notion of conjugation, or inner automorphism, for arbitrary categories. So basically, you can define an automorphism in an arbitrary category to be inner if there is an extended inner automorphism of the object that induces that initial automorphism — or, in other words, the automorphism is inner if it can be coherently extended along any map out of the object — because this is the result that characterizes the inner automorphisms of groups defined in terms of conjugation. So we've characterized the covariant isotropy group functor of T-Mod for any lambda-ary quasi-equational theory T: the covariant isotropy group of any model M, or its group of extended inner automorphisms, is isomorphic to the group of definable automorphisms of M.
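The degenerate example just mentioned can be checked in two lines; the notation here is mine, not the speaker's:

```latex
% Let G be a non-trivial abelian group, viewed as a one-object
% category C, so Aut(1_C) \cong G is non-trivial by assumption.
% Let J be the topology in which every sieve covers, including
% the empty sieve.  A J-sheaf F must satisfy the sheaf condition
% for the empty sieve, whose matching families form a singleton,
% forcing F(*) \cong 1.  Hence
\[
  \mathbf{Sh}(\mathcal{C},J) \simeq \mathbf{1},
  \qquad
  \mathrm{Aut}\bigl(1_{\mathbf{Sh}(\mathcal{C},J)}\bigr) = 1
  \;\not\cong\; G \cong \mathrm{Aut}(1_{\mathcal{C}}).
\]
```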
And I think that this characterization is most useful because it can provide a more concrete, or perhaps computational, way to characterize the extended inner automorphisms of a given model — because the definition of extended inner automorphism does not, in itself, allow for any obvious way to characterize them. Whereas if you work with the syntax of a theory for which this category is the category of models, then via this isomorphism you get a more down-to-earth, concrete, computational way to hopefully characterize the covariant isotropy of the category. Yeah, so using these results, we've shown that the characterization of covariant isotropy for presheaf toposes that we achieved earlier in fact essentially extends to all Grothendieck toposes. So for any small subcanonical site (C, J), the covariant isotropy group of the Grothendieck topos is constant on the automorphism group of the identity functor on C. And as we saw before, this basically amounts to saying that the only inner automorphisms of a sheaf on any such site are the ones induced by a natural automorphism of id_C. So although this result shows that covariant isotropy — as opposed to contravariant — for Grothendieck toposes is in some sense degenerate, or at least constant, the proof is still non-trivial. And we also intend to build on this result to characterize the covariant isotropy of categories of sheaves of algebraic structures, not just sets, which will be non-constant in general — since in my thesis I showed that the covariant isotropy of categories of presheaves of algebraic structures is non-constant in general. Okay, so that concludes my talk. Thanks very much. Oh, and I have some references here as well. Thank you very much for your talk.
Covariant isotropy can be regarded as providing an abstract notion of conjugation or inner automorphism for an arbitrary category. After reviewing the characterization of covariant isotropy in locally presentable categories (regarded as categories of models of essentially algebraic theories), we characterize the covariant isotropy of any Grothendieck topos, showing that it essentially agrees with the previously obtained characterization of covariant isotropy for presheaf toposes.
10.5446/55532 (DOI)
Thank you very much to the organizers for allowing me to give this presentation. The story I want to share with you today is about how we can think about toposes as ways of completing structure. So we might start off with some structure which doesn't have all of the properties we want it to have, and toposes are going to give us a way of talking about what structure should be there and what structure we can add. In particular, as the title says, we'll be looking at how we can add structure to certain logics coming from localic morphisms. So the idea behind the presentation I'm going to show you is that we want to utilize our intuition coming from the propositional setting. In the propositional setting we have localic toposes classifying propositional theories, but we also have localic toposes obviously coming from locales. And so what we want to be able to do is extend this to where we talk, in a relative topos-theoretic sense, about having a localic morphism over a topos and that corresponding to an internal locale. And then on the theory side we want to be able to talk about what sort of geometric theory we might get out of that. So that's the overall picture. I'm going to give some idea of what's going on with these branches of the bridge, and then I'll give a specific case which ties in very nicely with the logic. So yes, for people who haven't seen the definition before: a localic geometric morphism is a geometric morphism from a topos F to a topos E such that every object in our domain topos is a subquotient of something in the inverse image of E. Some examples of localic toposes: every inclusion is localic — that should immediately tell us that every topos comes attached with at least one. We also get a localic morphism to a presheaf topos, because every topos has an inclusion into a presheaf topos. And we can also get lots of examples of localic morphisms coming from locales.
So when we take sheaves on a locale, we get a localic topos, and an arrow between two localic toposes is also going to be a localic geometric morphism. And we can do the same with internal locales and perform the exact same construction relatively: we take the locales internal to a topos and we take internal sheaves on those locales. This gives us an embedding of the internal locales into Grothendieck toposes that has precisely the same properties you'd expect from the ordinary localic setting. And every arrow between these localic toposes is going to be localic as well — although there's a little caveat, because obviously when we take this embedding, there comes a choice of the morphism to E, and so it will be localic up to that choice of arrow. And we also have, by a result from, I believe, Joyal and Tierney, that for every localic geometric morphism the domain topos is of the form given here — that F is going to be the internal sheaves on the internal locale given by the direct image of the subobject classifier. So this result gives us a very nice description of what the internal locale corresponding to a localic geometric morphism should be. So that's what a localic geometric morphism is — but what is an internal locale? For the application I'm going to give at the end, we're only going to be concerned with internal locales of presheaf toposes. An internal locale of the presheaf topos on C, where C has finite limits, is going to be a functor from C-op into the category of locales such that all of the frame homomorphisms have left adjoints satisfying Frobenius reciprocity — this condition here is Frobenius reciprocity — and we also have the Beck–Chevalley condition holding. (These should be f's down here; I forgot to change that.)
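In one standard formulation — my notation, which may differ from the talk's slides — the two conditions on such a functor L read as follows. For an arrow f : c' → c, write f* = L(f) for the frame homomorphism and ∃_f ⊣ f* for its left adjoint:

```latex
% Frobenius reciprocity, for U in L(c') and V in L(c):
\[
  \exists_f\bigl(U \wedge f^{*}(V)\bigr) \;=\; \exists_f(U) \wedge V.
\]
% Beck--Chevalley: for every pullback square in C,
%
%     c' x_c d --q--> d
%        |            |
%        p            g
%        v            v
%        c' ---f----> c
%
% the left adjoints commute with substitution:
\[
  g^{*} \circ \exists_f \;=\; \exists_q \circ p^{*}.
\]
```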
So now we can go to the internal locales, and we can appreciate the arches of the bridge here: a localic morphism goes to the internal locale given by the direct image of the subobject classifier, and similarly, internal locales induce localic morphisms via the projection. We'll still see lots of localic properties cropping up, and that appears in the idea of a localic expansion of a theory. A localic expansion is where we take a theory, we take its signature, and we add new relation and function symbols to that signature, and then we take a theory over the expanded signature that proves all of the axioms of our previous theory. Now, it's a theorem due to Caramello, in "Theories, Sites, Toposes", that every localic expansion produces a localic morphism between the classifying toposes. And the observation is that every theory in a signature Sigma is a localic expansion of the theory of objects of that signature — this is the empty theory over the signature containing just the sorts of the signature of our theory: no function symbols, no relation symbols, just equality. So we're going to have that every classifying topos is localic over the classifying topos of the theory of objects for its signature. For the theory of objects, we're going to build an explicit site, and it's going to involve the category of relabelings of sorts, which I've denoted Sort with subscript Sigma. That category has as objects finite strings of variables in the sorts, and the arrows between these finite strings of variables are relabelings of variables — that means maps that respect sorts. Now, obviously, if Sigma is single-sorted, then a finite string of variables corresponds to a finite set, and any relabeling — because every map respects sorts — is just any map. And so the category of relabelings for a single-sorted theory is given by the category of finite sets.
And so then we can observe, since the classifying topos of the theory of objects in the signature Sigma is equivalent to this presheaf topos, that every classifying topos is the topos of sheaves on an internal locale of this presheaf topos. So that's the motivation for what we're going to do next: we've seen that whenever we have something that's localic over another topos we get an internal locale, and here we're going to see that we actually get quite a nice description of the internal locale corresponding to the classifying topos of a theory T. Now obviously, since every Grothendieck topos is the classifying topos of some theory, we're able to perform this construction for every topos. So let's just review our bridge in this restricted context. We have, as the unifying notion, toposes being localic over this presheaf topos, and this corresponds to two notions: on the one hand, the geometric notion of internal locales of this presheaf topos, and on the other hand, the notion of geometric theories in a signature Sigma. So we already appreciate what's going on with this bridge, but we need to understand how the two sides interact. In order to do that, I'm just going to quickly recap some of the material from Lafforgue's lectures earlier — not this week, at the end of last week. I'm going to recall some of the notions around syntactic sites for a geometric theory. For a geometric theory T, we can take the syntactic category, which has as its objects formulae-in-context, and the arrows between two formulae-in-context are provable-equivalence classes of formulae theta such that certain sequents hold — and, as Lafforgue explained, these just express the fact that theta is the graph of a function.
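For reference, the sequents just alluded to — expressing that theta is T-provably the graph of a function from {x. phi} to {y. psi} — are the standard ones (notation mine):

```latex
% theta is a provably functional formula from {x . phi} to {y . psi}:
\[
  \theta \;\vdash_{\vec{x},\vec{y}}\; \varphi \wedge \psi,
  \qquad
  \varphi \;\vdash_{\vec{x}}\; \exists\vec{y}\,\theta,
  \qquad
  \theta \wedge \theta[\vec{z}/\vec{y}\,] \;\vdash_{\vec{x},\vec{y},\vec{z}}\; \vec{y} = \vec{z}.
\]
% (totality on phi, and single-valuedness up to provable equality)
```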
So that's the usual definition of the syntactic category, and its syntactic topology has the following description: a sieve is covering if and only if the disjunction of the existential quantifications of the witnessing formulae is provably entailed by the codomain formula. And then it's very well known that the classifying topos of the theory T is the sheaves on this site with this topology. It's also going to be important later — we'll see how we can reconstruct it — that the topology is subcanonical. And so the classifying topos is going to be localic over the presheaf topos on the sorts of the signature, and we need to describe what the internal locale looks like. For that, I'm going to introduce this notion of substitutive syntactic sites. Here, given our geometric theory, we're going to produce an internal locale of the presheaf topos: it's a functor from the opposite of Sort-Sigma to the category of locales, and we associate to every context — every string of variables x — the locale given by the formulae in that context, ordered by syntactic provability. And whenever we have a relabeling, we send it to the locale map whose corresponding frame homomorphism is given by relabeling along sigma. So in this notation, the frame homomorphism corresponding to this locale morphism sends a formula in context y to the formula in context x where we substitute every instance of the variable y_i in the vector y by sigma(y_i), which is now a variable in the context x. And since we assume all of our contexts to be disjoint, there are no issues with the order in which we relabel the variables. So this is the associated internal locale — but how do we know that it's an internal locale?
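Schematically — in my own notation, not the speaker's slides — the assignment just described is:

```latex
% The substitutive internal locale of a geometric theory T
% over the signature Sigma:
\[
  L_{\mathbb{T}} : \mathbf{Sort}_{\Sigma}^{\mathrm{op}} \longrightarrow \mathbf{Loc},
  \qquad
  L_{\mathbb{T}}(\vec{x}) \;=\;
  \{\ \varphi(\vec{x})\ \text{geometric}\ \}\big/{\dashv\vdash_{\mathbb{T}}},
\]
% ordered by T-provability; a relabelling sigma : y -> x in
% Sort_Sigma is sent to the locale map L(x) -> L(y) whose frame
% homomorphism is substitution:
\[
  \psi(\vec{y}) \;\longmapsto\; \psi\,[\sigma(\vec{y})/\vec{y}\,] \;\in\; L_{\mathbb{T}}(\vec{x}).
\]
```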
It's an internal locale because the Beck–Chevalley and Frobenius conditions basically say that existential quantification behaves well with substitution — they're quite easy to show. And the second remark I want to make — and this is actually why those conditions hold — is that the left and right adjoints of our frame homomorphisms are given by existential quantification and universal quantification. So the internal sheaves are going to be closely related to the classifying topos, and indeed we're going to see that they are. So yes, when we take the relative construction — the Grothendieck construction of this internal locale (you can refer back to Olivia's lectures for the definition of this category) — in this particular instance we get a very nice description: the objects are formulae-in-context, and the arrows in this category are relabelings of variables. Yes, so an arrow from a formula phi in context x to a formula psi in context y is going to be a relabeling in the other direction such that phi entails, in context x, the formula psi with the variables relabeled — that sequent has to hold in our theory. And we end up with a family of arrows in this category being covering if and only if the corresponding disjunction sequent is provable. And because this is the description of the Grothendieck construction and the induced topology, we have that the sheaves on this internal locale are equal to the sheaves on this site. And when we look at this, it looks very much like the usual syntactic category, just that we've got the same objects but a lot fewer arrows — and sometimes our arrows will be identified. I should also mention that these topologies are generated by two species of covering families, and this corresponds to the fact that we've got horizontal and vertical covering data coming in from our relative topos theory.
So indeed we end up with this site having a dense morphism of sites to the usual syntactic site. The dense morphism of sites is given by this functor eta, which sends a formula-in-context to itself, and sends a relabeling to this provably functional formula here. And so we get that this is a dense morphism of sites, and as a corollary, we get that this is an alternative description of the site of the classifying topos of the theory T. Just to recall what a dense morphism of sites is: it's a functor from C to D satisfying these conditions — S is J-covering if and only if its image is K-covering; for every D we have a K-covering family of morphisms to D from the image; whenever we have an arrow between F(C1) and F(C2) that might not necessarily be in the image of F, we want it to be densely generated by arrows in the image of F; and — the fourth condition — whenever two arrows are identified by F, there exists a J-covering family of arrows which exhibits their equality. So once we have a dense morphism of sites, we've got an equivalence of the sheaf categories. So let's show that it is indeed a dense morphism of sites. The first condition is immediate by definition, because the two Grothendieck topologies are so similar. And the second condition — that there is a covering family onto every object — follows immediately because the functor is surjective on objects. It's the next two which are less immediate. So let's suppose we have an arrow coming from a provably functional formula in our syntactic category, which might not necessarily just be a relabeling of variables. So let's consider the diagram here, where we take the formula witnessing this provably functional formula.
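Spelled out — paraphrasing the slide; conventions vary slightly between references — the four conditions on a functor F : (C, J) → (D, K) are:

```latex
\begin{itemize}
  \item[(i)]   a sieve $S$ is $J$-covering iff the sieve generated by
               $F(S)$ is $K$-covering;
  \item[(ii)]  every $d \in \mathcal{D}$ admits a $K$-covering family
               of arrows $F(c_i) \to d$;
  \item[(iii)] for every arrow $g : F(c_1) \to F(c_2)$ in $\mathcal{D}$
               there exist a $J$-covering family $(f_i : b_i \to c_1)_i$
               and arrows $(u_i : b_i \to c_2)_i$ in $\mathcal{C}$ with
               $g \circ F(f_i) = F(u_i)$ for all $i$;
  \item[(iv)]  if $F(u) = F(v)$ for $u, v : c_1 \to c_2$, there is a
               $J$-covering family $(w_j : b_j \to c_1)_j$ with
               $u \circ w_j = v \circ w_j$ for all $j$.
\end{itemize}
```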
And now, due to all of the sequents that a provably functional formula has to satisfy, we have an arrow going to the object {x. psi}, and we can form the composite, as Lafforgue showed us in the previous series of lectures. And now we just need to show that this arrow is covering in our substitutive syntactic site, and that this one is coming from our substitutive syntactic site. But this is immediate from all of the extra sequents that provably functional formulae have to satisfy: this arrow is covering because we have this sequent holding, and this arrow is in the image of eta by the equivalence of these two formulae. And for the final condition: if we've got two relabelings sigma and tau such that eta identifies the two of them, then we have that phi proves that sigma(y_i) is equal to tau(y_i) for each y_i in our context. And now we need to find a covering family that witnesses this equality. But this comes from the commutative diagram here, where this is the coequalizer of these two relabelings — of course, all of these relabelings go in the opposite direction — and this is the map given by the projection to the coequalizer. We want to show that it's a covering arrow. And the required sequent, namely this one, can be proved easily from the fact that, since these two arrows up here are identified, we have this sequent holding for every y_i in our context. So we do indeed have a dense morphism of sites — which is nice, because now we have a nice way of talking about what the syntactic site is in terms of a localic morphism. So just quickly, I want to give some verifications of previously known results.
So for example — these are results from "Theories, Sites, Toposes" — if we've got T' being a localic expansion of another theory T, then these are both localic expansions of the empty theory on the sorts of the signature. So, from the localic expansion, we end up with a morphism of locales which we can construct in a very hands-on way: a morphism of locales internal to the presheaf topos on Sort-Sigma. And because we've got a morphism of internal locales, we end up with a localic morphism between the two toposes. So we verify our motivating result from earlier. But potentially more interestingly, a quotient theory T' of another theory T — this is the definition — is a theory in the same signature that proves all of the axioms of T. So we're able to recognize the other headline result from "Theories, Sites, Toposes": that there's a correspondence between quotient theories and subtoposes. I mean, there's more work going on here than is immediately apparent on the page, but we can appreciate where the correspondence is coming from: the subtoposes of a classifying topos correspond to inclusions of internal sublocales, which in turn correspond to quotient theories. And another result — which was, I believe, originally Joyal's; it appears in Joyal and Tierney, and Joyal mentioned it in his talk at the Topos à l'IHÉS conference — is that, since we have that every theory is localic over this presheaf topos here, we can use the fact that every theory is Morita-equivalent to a theory with a single sort to get that every theory is localic over the object classifier. Which is a very nice result that, yes, as mentioned, originally appeared in Joyal and Tierney.
And so finally, I want to go all the way back to the beginning, where I mentioned that we can think about toposes as telling us the information that we're missing. So we had this substitutive syntactic site. Now, where do the provably functional formulae come in? They come in here, where the syntactic category is the full subcategory of the representables in the classifying topos, and additionally the syntactic topology is the restriction of the canonical topology. The way to see this is that, because the topology is subcanonical, C_T has to be a full subcategory, and indeed it has to have the same objects as the representables. So this is where our extra arrows are being added: they're being added at the topos-theoretic level. And indeed, we could also go the other way: if we didn't know that the topology was subcanonical, we could define the syntactic category as the full subcategory of representables, and then the induced topology — the syntactic topology — would have to be subcanonical. And in Olivia's monograph there's a description of how we can do this in an elementary way. So yeah, thank you for listening. That's my talk. Thank you very much.
In this presentation, a substitutive syntactic site for the classifying topos of a geometric theory is introduced. This construction is understood in light of the relationship between localic expansions and internal locales. The usual syntactic site is recovered as a topos theoretic completion of the substitutive site.
10.5446/55176 (DOI)
Please, come on! I really like it — my friends are here, so they clap. So first of all, I must say this is kind of a remix of a talk I gave on World Plone Day in 2013. That presentation can be found online, in Portuguese, so you can at least compare the numbers to see how the Plone Foundation is doing over time. So first of all, what is Plone? I always love to answer that question, because people — especially in Brazil — are like, okay, Plone is the software. Yeah, it is, but Plone is kind of a trinity: Plone is a software, Plone is a community, Plone is a foundation. The software is not the community, the community is not the foundation — even though the foundation is basically there to help the software and the community. So everything together is Plone. And life was easier before, because at a certain point we only had the Plone software; nowadays we have many other nice kids on the block we need to take care of, so instead of software we could say code base or intellectual property — but Plone is that. Plone as software: a mature enterprise CMS, 18 years old — it can drink in most countries, it's amazing — version 5.2, we love it. But then Eric kind of messed with me and said, oh, Plone is probably going to be an API at some point, a contract, like a set of ideas, and we go from there. Thank you, Eric, for this. And it's a good way to evolve, especially because going from what we had in the past to what we have now has been quite a ride, and of course we evolved — we are following, and in some cases leading, changes in this content-management spectrum. But for me it's really important: Plone is a community. Plone is a community that's online, that's offline, that's generous, that helps people. We are talking to each other over IRC, Gitter, some of us on Slack — like five of us on Slack. We meet at conferences, at World Plone Day, we go to PLOG every year, we have bohata, we have wine, we love to drink together. And there I am — you see, it's one of my challenges to try to find myself in pictures of Plone conferences; I always disappear in the
background. But also, as I said, Plone is a foundation — a foundation formed by community members. It's a 501(c)(3) non-profit registered in the US; we must follow US rules, so every time we meet and discuss anything, there's a protocol to be followed. We are quite strict with that: every time we spend money, we need to explain why, and so on. And the foundation exists to promote and protect Plone. And that's me as well — this is a picture from 2012 in Arnhem; it's the board from 2011 with the board of 2012, the largest group of board members together in a single picture that I could find. But more importantly, about me: I've been using Plone since 2004. I've been part of the community, especially the international community, since 2006 — before that, I only cared about speaking Portuguese and getting customers within walking distance of my office. And since 2010, I am a foundation member. Besides that, I have 25 years in IT, I have gray hair — as my son told me the other day, "oh, you have a gray beard, so are you going to die soon, daddy?" Yeah, I'm getting that old. But I did many things: I worked for Microsoft a long, long time ago, I had Simples Consultoria, I'm a Python Software Foundation fellow, and right now I'm founding my company called PENDAQT — please visit the site, PENDAQT.com. And here I am. So, the Plone Foundation exists to promote and protect Plone. Of course, that was easy in the past. Now, what is Plone? We're talking about the community, we're talking about the intellectual property — Plone is not only the CMS. We now have Guillotina, the web framework; we have Zope — all the intellectual property of Zope is now under the Plone Foundation — and maybe Pylons; I don't know if Steve is here, but I would love to have Pylons here as well. The idea is to provide a framework to make things happen for the community. What is really, really important is to say what is outside the scope of the Plone Foundation. In the past, when I gave this talk the first time, in 2013, I was the vice president of the foundation, and I also had my
own company in Brazil, so every time I would talk about the Plone Foundation I would say: the Plone Foundation does not offer services; the Plone Foundation does not endorse any company, because that could be seen as a competitive advantage for some people; and so on. For instance, the Plone Foundation does not offer trainings — like official, paid trainings. We have the community-generated trainings that are available there, and if Rodrigo decides to give a training, he can do a training by himself. Another important point: the Plone Foundation does not lead the community, does not lead the development of Plone. From time to time people ask me, oh, what's the Plone Foundation's plan for the roadmap of Plone? The Plone Foundation, as a foundation, does not decide. Everyone in the Plone Foundation is also part of the community — we have opinions, and we are part of certain teams — but other than that, we are not steering, we are not making the decisions. Whether Plone is going to be a headless CMS or not is a community decision, okay? The foundation — especially the foundation board — has some liaisons to certain teams, just to provide a shortcut and to help speed up communication. We are not there to direct or manage. For instance, the security team: Alexander was the liaison for the past — 25 years, right? He was on the board, and if there was any announcement, he was the person who would get in touch with the board and pass on the news. We have the same thing for other teams, like the framework team, but just in the capacity of improving communication. On the other hand, what the foundation does do is provide a decision-making structure for the community. The Plone Foundation exists to help the community, to make the community work better, to provide tools, to provide money. We manage the copyrights, we manage the trademark, we deal with domains — you have no idea how much time we spend dealing with domains. If you read the minutes, it seems like, okay, they fixed the Chilean domain — and this is something that took time to
be sorted out, talking to people, and so on. In general, all intellectual property related to Plone is there, and we take care of it. We take care of code licensing — from time to time someone from the framework team, or another part of the community, says, okay, we want to re-license this part of the code as BSD. Do you remember, Matthew or Alexander, what was the last component we re-licensed? Yeah — front-end code to MIT. This is the kind of stuff the foundation takes care of. Of course, we also coordinate marketing and communication — this is really important for the survival of Plone. We do fundraising, so from time to time there's someone from the foundation asking for donations and trying to get sponsorships to help us fund the sprints, the events, the community infrastructure. For instance, we have our own servers to host the website, the conference site, and so on; this money comes from sponsorships and donations from the community. And one last thing — I was talking to Alexander earlier and he reminded me: when we talk about copyrights and trademarks, for instance, no company should be named "Plone Solutions". In the past we had Plone Solutions, and they renamed themselves. And I have no idea if that's going to be the case, but it would be fun: no company should be named "Zope Corporation" anymore either, right? We had that, and it's over. The foundation has two teams, or committees. One is the communication and marketing team — you have a link here to learn a little bit more about that; it was led by Kim over the past few years — and they are the ones making sure: oh, there's going to be a new event, PyCon US or EuroPython; these are the marketing materials; this is how we are going to present ourselves; we are going to talk about being the Pythonic CMS, about being Python 3 ready; and so on. And we also have the membership committee — led by me last time, and before that by Kim and Philip — that
selects new foundation members. And talking about that: the foundation does not exist by itself. It's not some phantom entity with a board of people taking care of stuff in some shady place. We need volunteers — we need people from this room, we need people from the community to make it happen. And here is the important part: every single member of the foundation came from the community. So the first thing I'm going to ask here: how many of you are foundation members at the moment? Okay, okay — it's hard to see; in the back there's a hand, there. Really, really important: every time we ask that question, we get some surprises and expletives, like, "how come you're not a foundation member yet?" My last huge surprise was with Fulvio. I've known Fulvio since I became part of the community, and we were at PLOG and we asked who's not a foundation member, and he raised his hand — and, kind of, what the fuck, how come? So it's really important. People usually think that to be a foundation member you need to be a core developer, you need to devote all your time and your firstborn to Plone. It's not like that — except for the part about the firstborn. What matters is this: you have to be a community member, and you have to have some significant and enduring contributions. So it's like, okay, I've worked with Plone for three months, I worked a lot — am I eligible? Yeah, probably not yet. But if you've been working with Plone for 15 years, doing stuff, helping with documentation, organizing events — you should be a foundation member. And here, let me just add something else. I come from Brazil, and there in Brazil there was this perception that when we talk about the community, it's only the English-speaking, global one. And it makes no sense, because in the end there are people in every region who do a lot — they help, they make us stronger, they've worked with Plone for years and were never part of the global one. So if you have contributed to Plone over time in your country, you are also eligible. You need to
write a nice application and submit it to the membership committee, and we are going to review it. Especially if you come from a place where you're afraid people might not know you, put down the names of other foundation members who can vouch for you; that makes our life easier. Then the membership committee votes and makes its recommendation to the Plone Foundation board, and the board approves it or not. Usually, if the membership committee approves someone, they are going to be approved by the foundation board, but we had some cases in the past where we politely asked for the nomination to be a little better written. So what does it give you, besides the secret handshake we have in here? Fulvio is going to learn his, probably tomorrow. It gives you a voice in the Plone community and in the Plone Foundation. It gives you the right to vote on important Plone Foundation decisions, like who is going to be part of the next board, or where the next Plone Conference is going to take place. Over the last few years we usually had just one candidate location, so there was no vote, but I remember my first time as a foundation member we had an election to select the 2011 Plone Conference location, and it went to San Francisco; everyone remembers, right? It was one of the first times I spent time with Matthew, and I remember Alexander there, and I remember Fulvio during the breakfasts, helping to onboard new people. And the contender that year was São Paulo, my city. It was a really close call. We had this plan: okay, we're going to have the Plone Conference in Brazil in three years. We presented our bid, the foundation membership went to a vote, and we lost by a really small margin, to the point that I was afraid. So the next year, 2012, I said, okay, I'm not putting in a bid; God knows what happens if we win. But that's the beauty of democracy: you can always win. Membership needs to be renewed yearly, and here is an important thing: right now we have 93 active
members in the Plone Foundation. In 2013, when I gave this same talk, we had 127. Basically, what happens is that every year in September, one month before the Plone Conference, we send a reminder. You go to plone.org and basically say, okay, I still want to be a member of the Plone Foundation, renew my membership, and so on. Over the last few years we had more people not doing that, so they moved from active members to emeritus members. They are important, they helped us in the past, but they are not active anymore, so they cannot come in here and vote. But if for some reason you forgot to renew your membership, you just ask the board or the secretary, and it's usually reinstated quite soon. Right now we have 93 active members from 25 different countries, most of us in Europe, then the US, with Latin America and Asia falling behind. Wild guess: besides the US, what's the country with the most members? Any guess? Oh, good one. And after Germany? The Netherlands. Oh, you saw the slide, man. So we have 21 from the US, 12 from Germany, 10 from the Netherlands, and 7 from Brazil. And I kind of cheated here, because two Brazilians have noted on the side that they're living in Berlin, so basically, just to increase the number for Latin America, I moved them from Europe to there. Important thing: we have 144 emeritus members, some really important people in the Python community. We have Paul Everitt, we have Luciano Ramalho, who wrote the Fluent Python book. These people were important foundation members in the past; they were active, and now they have moved on and are doing other things. So consider joining to help us keep growing. About the board of directors: it's composed of seven elected members and one treasurer. The treasurer is non-voting, but it's the person who takes care of our finance reports and so on, and you do not need to be a foundation member. So, for instance, let's take Huda: Huda is not a foundation member, but if he wants to volunteer to be a board member, he can nominate himself for 2020, not
this election now, but the next one, and then he can serve on the board. We have some really important Plone Foundation members who started as board members and were accepted as foundation members after that. It's usually a one-year term, from one Plone Conference to the next. It's a working board: we meet every two weeks over Hangouts, Skype, Zoom, or whatever, and each meeting takes between one hour and an hour and a half. We basically discuss marketing, communication, and intellectual property, and eventually we talk about the code of conduct; thanks to this amazing community, it has been a while since we had any discussion about that. And it's really important, if you are a foundation member: elections are happening right now. There's a link in here, but to make it easier, you can grab Matthew and give him your vote. Matthew, raise your hand. Yeah, you can basically give him your vote for the board; we have eight candidates for seven positions. And this is an example of a board meeting, a really well-behaved one, because no one is drinking this day. First of all, I would like to thank Alexander, because he did some archaeology of the Plone board: since the foundation started in 2004, we have had 43 people working as board members, 37 men and 6 women, and usually people stay around three years, 2.81 terms on average. Of course, we have people who serve one board term and then move on, and we have people who stay longer than that, but usually you stay for two or three years and then do something else. It's really fun; it's one of the most rewarding experiences I've had in the Plone community, and it's really a pleasure talking to these amazing people every week. I remember at some point Elizabeth Leddy was part of the board, the same board Matthew was on, and she gave a talk at PyCon Canada where she said that working with the Plone community was amazing, because we were the first people she
talked to in the morning and the last people she talked to every night, and we had a really diverse board, and it was really fun. And that is really important for you to know: it's really a pleasure to be part of this board, so be part of it. Apply to become a member of the foundation, please do it. If you know someone who is not a foundation member and should be, please twist their arm, or basically approach me and I will do the arm twisting. If you're already a member, vote, raise your voice. And for everyone in here: tomorrow at 1:30 in this room we are going to have the membership meeting. We do that every year; it's where we are going to announce the new board and present the results from the 2018-2019 board year, and everyone is invited, although of course only foundation members can vote and only foundation members can make motions. Also, volunteer for one of our teams: the marketing team is open to everyone, while the membership committee is open only to current foundation members. And please consider running for the board in 2020. It's really important that we have new people on board, and that you take some time and help us in a different way; it's not only code and documentation. One last thing: I would like to thank Red Turtle for organizing this conference. It's really amazing; you guys are awesome. I have known Red Turtle for quite a long time; I used their products for a long time before I became a Plone Foundation member or a contributor, and I deeply admire you guys. Thank you. These are my contacts, and this is the Twitter account of my new company, Pendect; please go there and follow us. This presentation will be on SlideShare. And this is a nice picture Kim took of me in 2014, where I was using my old MacBook as a support to keep my food warm; that's how much I love my computers. Thank you all. If you have questions... and if you do not have questions, it's lunchtime, right? And I have my computer, if someone wants to warm
their plates. Questions? We have a question here. Hello, first state your name. I'm Rodrigo. Are the Plone Foundation board meetings open for everyone to see? No, we open the meeting once a year so people understand how we work, but you can follow what was discussed, even if you're not a foundation member: on plone.org, under foundation, in the meeting minutes, you can see every single decision the Plone Foundation has made since 2004; everything is there. And you can also always see the agenda for the next meeting, because we publish it two or three days prior to the meeting, and the meetings usually take place on Thursdays around nine... no, eight UTC.
Eric Steele began his Plone career with Penn State University's WebLion group, where he authored several widely-used Plone products, including GloWorm and FacultyStaffDirectory. In 2009, Eric became Plone's release manager, overseeing the Plone 4 and 5 releases. By day, he works for Salesforce, building testing and release automation tools to support corporate philanthropy and employee volunteering efforts.
10.5446/55534 (DOI)
Okay, thank you very much. So I had just finished presenting the notion of a language, or signature, of a first order theory and the interpretations it may have in a topos, or more generally in categories with finite products; these interpretations are called sigma structures. By now I want to come to the notion of a first order theory. A first order theory consists, first, in a language or signature sigma, as before, and secondly, a collection of axioms. These axioms have to be formalized, and they are formalized as so-called sequents. This is denoted in this strange way, which is familiar to logicians: you have two letters, phi and psi, and these two letters are related by this strange symbol, which just means an implication. Phi and psi have to be formulas, more precisely geometric formulas. So the meaning of such an axiom is that one formula, phi, has to imply another formula, psi. For instance, if you think about the theory of order relations, one of the axioms is that if A is lesser than B, and B is lesser than C, then A is lesser than C. Here, phi is the formula "A lesser than B and B lesser than C", and psi is the formula "A lesser than C". And in fact, all the axioms of first order theories can be phrased in this way, as implications between formulas. The formulas which appear in such theories are geometric formulas, and they have to be formulas with variables. You see that in the example I just gave, there were variables A, B, and C. So to write formulas and implications between formulas, we need variables, and each variable has to take values in some sort. This means that any sequent has a context consisting in a finite family of variables, each one of them associated to some sort; and of course, we may have several variables with the same associated sort. Okay, so by now we want to make precise how the formulas, phi and psi, are built.
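In the standard notation alluded to here (a sketch of the "strange symbol" the lecturer describes), a sequent and the transitivity example read:

```latex
% A sequent: phi implies psi, in the context of variables x_1,...,x_n
\varphi \;\vdash_{x_1,\dots,x_n}\; \psi

% Example: transitivity axiom of an order relation, in the context
% of three variables a, b, c of the same sort
(a \leq b) \wedge (b \leq c) \;\vdash_{a,b,c}\; (a \leq c)
```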
So geometric formulas are built from more elementary formulas which are called atomic formulas. At the present moment, I don't say what an atomic formula is; it will be on the next page. I just say that general geometric formulas are built from atomic formulas using three types of symbols. First, a symbol of finite conjunction: for instance, in the example I just gave, there was the formula "A lesser than B and B lesser than C"; here you see there is an "and", and this is the symbol of finite conjunction. But we also allow arbitrary disjunction. When I say arbitrary, I mean that it can be infinite, but of course indexed by some set. There is also the empty conjunction, which is the "true" symbol, and the empty disjunction, which is the "false" symbol; they are denoted in this way. Lastly, there is also the existential quantifier over part of the variables. We are used to doing that all the time when we write mathematics. Okay, so by now, what is an atomic formula? An atomic formula is deduced from a relation formula. A relation formula is a relation symbol R in the context of some sorts A1, ..., An, but considered as a formula in some variables x1, ..., xn associated with the sorts A1, ..., An. We also allow equality formulas between variables with the same associated sort. So an atomic formula either has the form of an equality or of a relation associated with a relation symbol; but we also allow substitution of variables by terms. So what is a term? A term is just a function symbol in some variables, or a composition of such function symbols. The notion of term is defined in an inductive way: you start from a function symbol F in some variables x1, ..., xn associated to the sorts A1, ..., An of the function symbol, and you are allowed to replace one of the variables by another function symbol G in some variables.
But of course, you have to respect the fact that the function symbol G takes values in the sort Ci which is the sort of the variable you have decided to substitute. So here, you see, it is just a formalization; we want to make precise what we do all the time. In mathematics, we are always using variables, and we are used to substituting variables with more complicated expressions, and these more complicated expressions ultimately have to come from some function symbols. Okay, so here we are just trying to write in a precise way what we are used to doing all the time in mathematics. Here is a remark: of course, in mathematics, even in first order mathematics, we may use other symbols than the symbols listed in this definition. These other symbols which we are used to employing are, first, arbitrary conjunctions, not only finite conjunctions. Existential quantifiers we already have, but we also use universal quantifiers, meaning "for any" over a variable; and also the implication symbol, and the negation symbol. When we have a formula, sometimes we decide to take the negation of this formula; or if we have two formulas, phi one and phi two, we can consider the formula "phi one implies phi two". All of this has meaning; it can be interpreted, for instance, in the context of set theory. And here, in fact, we have the full list of the symbols which are used in first order mathematics. For people who don't know, you may ask what higher order mathematics is. Higher order mathematics is when you also allow exponential operations, such as replacing two sets by the set of maps from the first one to the second one, or replacing a set by the set of subsets of the set. These exponential operations are not allowed in first order mathematics. And the theory of classifying toposes we are dealing with here is a theory for first order mathematics.
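The inductive definitions of terms, atomic formulas, and the geometric fragment just described can be summarized as a small grammar; the BNF-style notation below is my own shorthand for what the lecturer builds up in prose:

```latex
% Terms over a signature (x a variable, f a function symbol):
t \;::=\; x \;\mid\; f(t_1,\dots,t_n)

% Atomic formulas (R a relation symbol; \top, \bot are the empty
% conjunction and empty disjunction):
\alpha \;::=\; \top \;\mid\; \bot \;\mid\; (t = t') \;\mid\; R(t_1,\dots,t_n)

% Geometric formulas: finite conjunction, arbitrary set-indexed
% disjunction, and existential quantification only:
\varphi \;::=\; \alpha
   \;\mid\; \varphi_1 \wedge \dots \wedge \varphi_k
   \;\mid\; \bigvee_{i \in I} \varphi_i
   \;\mid\; \exists y\, \varphi
```

Universal quantification, implication, and negation are deliberately absent from the last clause; that is exactly what "geometric" restricts.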
In fact, here I can mention that if you want to take into account higher order mathematics, then you have to consider relative toposes, I mean a topos over another topos, possibly over yet another topos, and so on. This is presently being done by Olivia Caramello and her student Riccardo Zanfa, and I think they will talk a little about it, Olivia in her lectures and Riccardo in his talk next week. Okay, but in my lectures, I only talk about first order mathematics. Here I want only to repeat what I already said: if we have any first order theory, including with these other symbols, universal quantifier, negation, and so on, then we can replace it by a geometric first order theory, which means one with only finite conjunction, arbitrary disjunction, and the existential quantifier, without changing the set theoretic models. So this means that the condition of being geometric is not that restrictive. And in fact, making a theory geometric is, as we shall see, a way to make it topos friendly; when we are doing mathematics, it is a very good idea to formulate the theories you are interested in in a way that makes them topos friendly. Okay, so by now I have explained what a first order geometric theory is, and I have to talk about the way such a theory can be interpreted. I have already explained how languages, which means signatures, can be interpreted. So let's consider a signature, and first, let's consider a term. A term is just a function symbol in some variables, or a composition of such function symbols. Let's consider a topos, and a sigma structure M in this topos. What is the interpretation of this term? The interpretation is also defined in an inductive way. It will be a morphism from the product of the objects associated with the sorts which are associated to the variables, and it takes values in the object associated to the sort where the term f takes values.
So if f is a function symbol, the interpretation of the corresponding term is just the associated morphism Mf; this is the base case, because a sigma structure, by definition, associates a morphism to any function symbol. And by now, if we have a term which is deduced from a simpler term by substitution of a variable by a function symbol, then the interpretation Mfk is deduced from the interpretation Mfk-1 just by composition with the interpretation of the function symbol g. You see, the definition is inductive, and here we just use the categorical structure, the fact that it is possible to compose morphisms in a category. Okay, so by now we can move to the interpretation of atomic formulas. Let's consider once again a signature, then a topos, then a sigma structure M, and an atomic formula; and let's consider first the cases of the formulas "true" and "false". If the formula is "true" in the variables x1 : A1, ..., xn : An, its interpretation by definition is the full sub-object of MA1 × ... × MAn. If the formula we consider is "false", its interpretation is by definition the smallest sub-object of MA1 × ... × MAn; if you prefer, it is the empty sub-object. You see, I am just saying things which are obvious in the context of sets; but in fact, from the point of view of categorical structures, a topos is just as good as the category of sets, so everything which makes sense in the category of sets makes sense in the context of an arbitrary topos. By now, if we consider an atomic formula which is just a relation symbol R in the context of sorts A1, ..., An, its interpretation by definition is the sub-object MR of MA1 × ... × MAn. And if we consider an equality relation between two families of variables, then its interpretation is just the diagonal sub-object: MA1 × ... × MAn embedded diagonally in (MA1 × ... × MAn) × (MA1 × ... × MAn).
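In symbols, the interpretation clauses just described read as follows (the notation for contexts and subobjects is mine, chosen to match the lecture):

```latex
% Terms: a term t in context x_1{:}A_1,\dots,x_n{:}A_n, of sort B,
% is interpreted as a morphism
Mt \;:\; MA_1 \times \dots \times MA_n \longrightarrow MB,
\qquad M\big(f(x_1,\dots,x_n)\big) = Mf .

% Atomic formulas, as subobjects of MA_1 \times \dots \times MA_n:
M(\top) = MA_1 \times \dots \times MA_n \quad\text{(full subobject)},
\qquad M(\bot) = \text{empty subobject},

M\big(R(x_1,\dots,x_n)\big) = MR \;\rightarrowtail\; MA_1 \times \dots \times MA_n,

M(\vec{x} = \vec{y}) \;=\; \Delta :
  MA_1 \times \dots \times MA_n \;\rightarrowtail\;
  (MA_1 \times \dots \times MA_n) \times (MA_1 \times \dots \times MA_n).
```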
And by now, if the term phi k, excuse me, if the atomic formula phi k is deduced from a simpler atomic formula phi k-1 by substitution of a variable by a function symbol, then the sub-object M phi k is deduced from the sub-object M phi k-1 by pullback, by base change along the morphism MG associated with G. And of course, the pullback of a sub-object is a sub-object. You see, everything I have defined here is a sub-object: if phi is the "true" formula, the interpretation is the full sub-object; if phi is the "false" formula, the interpretation is the empty sub-object; if phi is a relation symbol, the interpretation is the associated sub-object, just by definition of a sigma structure; and if phi is an equality relation, the interpretation is the diagonal sub-object. Then, starting from these basic atomic formulas, when we substitute variables with function symbols, and do that repeatedly, at each step we just take the pullback, and the pullback of a sub-object is always a sub-object. So atomic formulas are always interpreted as sub-objects. Okay, and by now we can interpret general geometric formulas. Let's consider once again a signature, a topos, a sigma structure in this topos, and a geometric formula in the context of some variables x1 : A1, ..., xn : An. Then the geometric formula is always interpreted as a sub-object of MA1 × ... × MAn. So let's consider the different cases. The first case is when we have a formula phi which is deduced from some simpler formulas phi1, ..., phik by finite conjunction. These simpler formulas phi1, ..., phik are interpreted as sub-objects M phi1, ..., M phik of MA1 × ... × MAn, and in order to interpret phi, we just take the intersection of these sub-objects. The intersection is just a pullback, a finite limit operation, so of course it perfectly makes sense in the context of the topos.
Then, if phi is defined by an arbitrary union of some family of formulas phi i, each phi i is interpreted as a sub-object M phi i of MA1 × ... × MAn, and we just take the union. The union is by definition the smallest sub-object which contains all of them. And here we remark that for this union we have a formula, and the formula is a colimit formula: the colimit of the diagram I have written, consisting of the interpretations M phi i and their pairwise fiber products. So you see that this formula only makes use of finite limits and colimits. Okay. And lastly, we need to interpret the existential quantifier. Then we have a formula psi in two families of variables, x1 : A1, ..., xn : An and y1 : B1, ..., ym : Bm, and of course M psi is interpreted as a sub-object of MA1 × ... × MAn × MB1 × ... × MBm. This big object has a natural projection onto MA1 × ... × MAn, and here the interpretation of the existential quantifier is that we replace the sub-object M psi by its image in MA1 × ... × MAn. The image by definition is the smallest sub-object of MA1 × ... × MAn whose pullback contains M psi. So this is the definition, but because we are in a topos, this image is given by a formula: the formula I have written, which is once again the colimit of the diagram consisting of M psi and the fiber product of M psi with itself over MA1 × ... × MAn. This formula is obviously true in the context of sets, and it remains true in the context of an arbitrary topos. And this is very nice because, you see, for the interpretation of the existential quantifier we once again only need finite limits and arbitrary colimits. Okay. So this is the remark at the bottom of this page: because we are in a topos, all these formulas, which only make use of finite limits and arbitrary colimits, are always preserved by base change.
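Collecting the three clauses, the interpretation of the geometric connectives as subobjects of MA1 × ... × MAn can be summarized like this (a compact restatement of the lecture's case analysis):

```latex
% Finite conjunction: intersection (a pullback, i.e. a finite limit)
M(\varphi_1 \wedge \dots \wedge \varphi_k)
  \;=\; M\varphi_1 \cap \dots \cap M\varphi_k

% Arbitrary disjunction: union, i.e. the smallest subobject
% containing every M\varphi_i (a colimit of the M\varphi_i and
% their pairwise fiber products)
M\Big(\bigvee_{i \in I} \varphi_i\Big) \;=\; \bigcup_{i \in I} M\varphi_i

% Existential quantifier: image of M\psi along the projection
% \pi : MA_1\times\dots\times MA_n\times MB_1\times\dots\times MB_m
%       \to MA_1\times\dots\times MA_n
M(\exists \vec{y}\,\psi) \;=\; \operatorname{Im}\big(\pi|_{M\psi}\big)
```

Only finite limits and arbitrary colimits appear, which is exactly why these interpretations survive base change and pullback along topos morphisms.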
This is because in a topos, base change not only respects finite limits, but also arbitrary colimits. And they are also preserved by the pullback components of morphisms of toposes, because by definition of a morphism of toposes f from E' to E, the pullback component respects arbitrary colimits and finite limits. So this means that all the interpretations of geometric formulas are respected by pullbacks, and they are also respected by the pullback component of a morphism of toposes. And we can remark that in fact the other symbols of first order mathematics, arbitrary conjunction, universal quantifier, implication, negation, are also interpretable in any topos, just as they are in the context of sets; once again, from the point of view of categorical structures, a Grothendieck topos is just as good as the category of sets. But the interpretations of these other symbols are not always respected by base change: if we have a topos morphism f from E' to E, the pullback functor between the categories of sigma structures does not in general respect the interpretations of these symbols. It does not respect arbitrary intersections, it does not respect the universal quantifier; it only respects the symbols which are allowed in geometric first order theories. This is why, when you have a first order mathematical theory, if you phrase it in a geometric way, you can say that you have made it topos friendly. Okay, so by now, at last, we can talk about models. Let's consider a signature, a geometric first order theory, and a topos. So, the definition: a sigma structure M in this topos is called a model of T; and here I have made a mistake on the slide, T is the theory, it's not the symbol of "true".
Okay, so M is a model of T if, for any axiom consisting in a sequent between two formulas phi and psi, the sub-object which is the interpretation of phi in M and the sub-object which is the interpretation of psi in M, which are of course two sub-objects of the same object, verify the inclusion relation: the interpretation of phi is included in the interpretation of psi. When this is verified for every axiom of the theory, you say your sigma structure is a model of the theory. A morphism of models of the theory is just a morphism of the underlying sigma structures. So this is the definition. Of course, in particular it applies to the category of sets, and in this way you get the notion of a set theoretic, or set based, model of a theory; but it makes sense in the context of an arbitrary topos. Okay, and with this definition, we realize that the models of the theory T in a topos make up a category: by definition, the full sub-category of the category of sigma structures on the T-models. The morphisms are the same, they are just morphisms of sigma structures, but the objects are the sigma structures which are models of T, which means where all the axioms of T are verified. Because the theory we consider is geometric, and the pullback components of topos morphisms respect the interpretation of geometric formulas, we get that the pullback functor f* between the categories of sigma structures induces a functor between the categories of models. And so by now, if we consider an arbitrary geometric theory T and an arbitrary model M of T in a topos E, there is, for any topos E', an induced functor from the category of topos morphisms from E' to E to the category of models of T in E': the functor just associates to any topos morphism F the pullback of M by F. Okay.
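The model condition can be stated compactly; this is just the inclusion of subobjects described above, written symbolically:

```latex
% M is a T-model in a topos E when, for every axiom of T,
\big(\varphi \vdash_{\vec{x}} \psi\big) \in T
  \;\Longrightarrow\;
  M\varphi \;\leq\; M\psi
  \quad\text{as subobjects of } MA_1 \times \dots \times MA_n .
```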
And then the definition of the classifying topos: what you require for a topos to classify the theory T is that this functor be an equivalence of categories for any topos E'. So what we want is that for any topos E, the functor which associates to any topos morphism from E to ET the pullback of the model UT of T in ET, we want this functor to be an equivalence of categories. And it is a formal consequence of the very meaning of the statement that if such a couple (ET, UT) exists, it is unique up to equivalence. The existence of such a topos is a theorem, and it takes some time to prove it. Okay. So by now I want to begin the proof of this theorem. I repeat what we want to do: we start from a first order theory presented in a geometric way, a geometric first order theory T, and we want to associate with it a topos. But toposes are always constructed from a pair consisting of a category C endowed with a Grothendieck topology J; Grothendieck called such a pair a site. A site is a pair consisting of a small, or essentially small, category endowed with a topology. So what we need, in order to construct a classifying topos ET, is to define a category CT together with a topology JT on this category, so that the associated topos ET, the topos of sheaves on CT for the topology JT, answers the question. More precisely, we shall get the following. In order to construct the topos, we will construct a category CT with enough categorical properties for the category of models of T in CT to be defined. Be careful: CT will not be a topos; it will be much smaller. In fact, it will be an essentially small category.
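The universal property just stated, which defines the classifying topos, reads as follows in symbols:

```latex
% For every Grothendieck topos E', pulling back the universal model
% U_T along a topos morphism f induces an equivalence of categories:
\mathbf{Hom}(\mathcal{E}', \mathcal{E}_T)
  \;\simeq\; T\text{-}\mathbf{Mod}(\mathcal{E}'),
\qquad f \longmapsto f^{*}(U_T).
```

Uniqueness of the pair (E_T, U_T) up to equivalence is then formal, exactly as for any representing object.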
But it will have enough categorical properties for geometric formulas to be interpretable in this category, and so for the models to be defined. In this category there will be a canonical model MT, and, as I already said, a topology JT, such that, denoting ET the classifying topos and UT the image of MT through the canonical functor L, which goes from CT to the topos of sheaves, namely the Yoneda embedding into the category of presheaves on CT composed with the sheafification functor, we will get that for any topos E, the functor from the category of topos morphisms from E to ET to the category of T-models in E is an equivalence. So this is what we want to do. Okay, and I just said that this category CT needs enough categorical properties for geometric formulas to be interpretable in it. So here we give a definition of what we want: the notion of a geometric category, a category where geometric formulas can be interpreted. So what do we need? First, we need this category to have all finite limits. In particular, this means that for any morphism P from X to Y in the category, there is an induced functor between the associated categories of sub-objects. Here I have denoted Omega(Y) and Omega(X) the categories of sub-objects of Y and X, because in any category with finite limits, if you consider a sub-object Y' of Y, then the fiber product of Y' with X over Y is a sub-object of X. In this way, we define a pullback functor from the category of sub-objects of Y to the category of sub-objects of X. And what we want is for this pullback functor to have a left adjoint; this left adjoint will be the interpretation of the existential quantifier. So we denote it with the existential quantifier symbol, with P as an index.
Of course, it has to go in the reverse direction, from Omega(X) to Omega(Y), and by definition it has to be left adjoint to the pullback functor; this uniquely determines the functor. We also want this image functor, this "there exists" functor with index P, to be compatible with base change. This means that for any Cartesian square, as I have written at the bottom of the page, the associated square, where at the horizontal level you have the pullback functors on sub-objects and the vertical arrows are the existential functors, has to be commutative as soon as the square on the left is Cartesian. Okay, so this is part of what we want for a category to be geometric. But we also want arbitrary disjunctions to be interpretable. This means that any family of sub-objects Si of some object Y has a well-defined union. What is a union? It is a sub-object such that, for any sub-object S of Y, the union is lesser than S if and only if each Si is lesser than S. We want this to exist for any family of sub-objects Si; when it does exist for any such family, we say that unions are well-defined in our category. And we want not only unions to be well-defined, but also to be compatible with base change: for any morphism from X to Y, the pullback of the union has to be equal to the union of the pullbacks. Okay, so a category will be called geometric when it verifies all these properties. In fact, we have already remarked that any topos is a geometric category. And by now, when we have a functor between two geometric categories, we say that this functor is geometric if it respects, first, finite limits, and then the existential functors.
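The two compatibility conditions in the definition of a geometric category can be drawn out; the diagrams below are my sketch of what the lecturer describes "at the bottom of the page":

```latex
% Existential quantification as left adjoint to pullback:
\exists_p \;\dashv\; p^{*} : \Omega(Y) \to \Omega(X)
\qquad\text{for every } p : X \to Y.

% Beck-Chevalley condition: for every Cartesian square (left),
% the induced square of subobject lattices (right) commutes:
\begin{array}{ccc}
X' & \longrightarrow & X \\
\downarrow{\scriptstyle p'} & & \downarrow{\scriptstyle p} \\
Y' & \xrightarrow{\;\;g\;\;} & Y
\end{array}
\qquad\Longrightarrow\qquad
\begin{array}{ccc}
\Omega(X) & \longrightarrow & \Omega(X') \\
{\scriptstyle \exists_p}\downarrow & & \downarrow{\scriptstyle \exists_{p'}} \\
\Omega(Y) & \xrightarrow{\;\;g^{*}\;\;} & \Omega(Y')
\end{array}
```

Together with unions that are stable under pullback, these conditions are exactly what makes every geometric formula interpretable.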
This means that for any morphism P from some X to Y, and for any sub-object S of X, the transform by the functor F of the image of S is equal to the image, by the morphism F(P), of the transform of the sub-object S by F. Okay, so we want this property to be verified, and we also want F to respect unions. If these three properties are verified, we say the functor is geometric. Okay. And by now, we remark that if we have an arbitrary signature, a geometric category, and a sigma structure M in this category, then any geometric formula in a family of variables with associated sorts A1, ..., An is interpretable as a sub-object M phi of MA1 × ... × MAn. In fact, we have already done that in the case of toposes, and in the previous definition we have really defined the notion of geometric category precisely in order for this interpretation process to be possible. So there is such an interpretation, and as the geometric formulas are interpretable, it is possible to talk about the models of a geometric theory T in such a geometric category: we say that the sigma structure M is a model of T if, for any axiom "phi implies psi" of the theory, the sub-objects M phi and M psi verify the inclusion relation M phi included into M psi. So the notion of model is well defined, and we can define the category of models of T in C as the full sub-category of the category of sigma structures in C on the models of T. And we remark that if we have a geometric functor between two geometric categories, it induces a functor between the associated categories of models; in fact, all the definitions were written in order for this to be verified. Okay, so by now we have the following theorem, which holds for any geometric first order theory, and which says that we have what we want. The following property is verified for any geometric category C: consider the map which associates to any geometric functor F from CT to C
the image of the model M_T by F, which is well defined as a model of T in C, then this defines an equivalence of categories for any geometric category C. The second part of the statement is that this pair (C_T, M_T) is well defined — it is uniquely determined up to equivalence. Okay, so this means that, at the level of geometric categories and geometric functors, the functor of models of the theory T is representable. It is representable by a special category, called the syntactic category, endowed with a special model, which can be called the universal model of the theory. Okay, so this is the theorem; by now we can construct it. So let's start with such a geometric theory. Before I begin the construction, there is a remark, which is the fact that if this category endowed with its canonical model exists, then it is unique up to equivalence. So this is part two of the theorem, and this is formal: it is exactly a categorical version of the Yoneda lemma. If a functor is representable by some object, this object is unique up to unique isomorphism. Here the functors don't take values in sets but in categories, so for this reason the usual notion of isomorphism in the context of the Yoneda lemma has to be replaced with the notion of categorical equivalence, but the proof is exactly the same. So this is okay for uniqueness; the difficult point is existence. And in order to prove existence, we are going to construct this category in a concrete way. What does it mean to construct such a category? It means we are going to give a list of objects, a list of morphisms, and the definition of the composition law of morphisms. Okay, so let's do it. Let's start with such a geometric theory T. We have to define what are the objects of C_T, what are the morphisms of C_T, and what is the composition law of morphisms of C_T.
So here we decide that the objects of the syntactic category are just geometric formulas. Okay, geometric formulas were defined in the first lecture this morning. We consider all geometric formulas up to substitution of variables, which means that we consider two geometric formulas as equivalent when one is deduced from the other by just a change of notations of the variables. Okay, so this is of course very natural. Then we have to decide what is a morphism. The idea is to define morphisms through their graphs. If we want to define a morphism from a formula to another formula — from an object to an object — in a category where of course products will be well defined, then to define a morphism is the same thing as to define its graph as a sub-object of the product of the objects phi of x and psi of y. And a sub-object of the product object has to be a formula theta in the union of the families of variables x and y. When you have such a formula, of course, the formula will define a sub-object; and what you need in order for this sub-object to be a graph are conditions in terms of the values of x and y. The first requirement is represented by an implication between formulas: we want the formula theta, which is to define the morphism, to imply both the formula phi in the variables x and the formula psi in the variables y. Secondly, we want the projection onto the first factor phi of x to be onto, in order to have a graph. And this is represented by the following sequent: if the formula phi is verified, then there should exist some y such that the formula theta of x, y is verified. So you see, this sequent corresponds to the property for the projection morphism to be onto, in order to define a graph.
And lastly, we want this projection onto the first component phi to be an isomorphism. So it has not only to be onto, but also to be a monomorphism. And what does it mean? It means that if the property theta is verified both by (x, y) and (x, y′), it has to imply y equal y′. So we put these as the definition for the formula theta to be provably functional: we want these three sequents to be provable in the theory T. Yes, here I see that at the last line I have written "provable in theta"; in fact, it is not theta, this is "provable in T". You want these three sequents to be provable in your theory. So now, what will be the composite of two morphisms? You just have to think about the following question: when you have two morphisms, which are given by two graphs, what is the graph of the composite morphism? And the graph of the composite morphism is given by the formula I have written there at the last line of part three. We are considering here three families of variables x, y and z, and the definition of the graph of the composite is the image of the intersection of the graphs of the first and second morphisms. You see, you take the image — this means that you use the existential quantifier on the variables y — applied to the conjunction of the formulas theta and theta′, and this defines the graph of the composite. Okay, so here we have defined some things. Of course, we have to verify that it is a category, and not only that it is a category, but that it is geometric and that it represents the functor of models. Before we do that, we remark something about the definition. You see, in the definition of objects, objects are geometric formulas up to substitution of variables, so the list of objects does not really depend on the theory. It only depends on the signature sigma; it does not depend at all on the axioms of the theory. Okay, this is the first remark: the objects only depend on the signature.
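In the usual sequent notation, the three conditions just listed can be written out as follows (my rendering of what is described above):

```latex
% theta(x, y) is provably functional from phi(x) to psi(y) when
% the following three sequents are provable in T:
\theta \;\vdash_{\vec x, \vec y}\; \phi \wedge \psi
\qquad\text{(theta implies both phi and psi)}

\phi \;\vdash_{\vec x}\; (\exists \vec y)\,\theta
\qquad\text{(the projection onto phi is onto)}

\theta \wedge \theta[\vec y\,'/\vec y]
  \;\vdash_{\vec x, \vec y, \vec y\,'}\; \vec y = \vec y\,'
\qquad\text{(the projection is a monomorphism)}

% The composite of [theta] : phi -> psi and [theta'] : psi -> chi
% is the class of the formula
(\exists \vec y)\,(\theta \wedge \theta')
\qquad\text{(image of the intersection of the two graphs)}
```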
But of course, the morphisms depend on the notion of provability in the theory T. So in order for our definition to make sense, we need to make precise what it means for a sequent — for an implication between formulas — to be provable in the theory. So this is another definition. A sequent, or if you prefer an implication between two geometric formulas, is called provable in a theory of signature sigma if it can be deduced from the axioms of the theory by the rules of logic which we are used to using when we do mathematics. When we do mathematics, we are always making deductions, and in order to make deductions we use inference rules. In this definition we have given the full list of inference rules which are allowed. These are all basic rules and every one of us is used to them, but it is interesting to write them down without forgetting any rule. In fact, I think the first one in history to do that — to just list the usual rules of logic — was Aristotle, and in the context of mathematics, of course, it was done again by Hilbert. So here are the deduction rules. First there is the cut rule: if phi1 implies phi2 and phi2 implies phi3, then phi1 implies phi3. Okay, then there is the identity rule: for any term F, true implies F = F; so this means that F = F has to be verified all the time. Then there are the equality rules: if F1 is equal to F2, then F2 is equal to F1; if F1 is equal to F2 and F2 is equal to F3, then F1 is equal to F3. Then there are the substitution rules. If we have two terms F1 and F2, and we consider another term F, suppose that the terms F′1, F′2 are deduced from F1, F2 by substitution of F for some variable — you see, we suppose that F1 and F2 are in the same variables, and we decide to replace one variable by F —
or we suppose that F′1, F′2 are deduced from F by substitution of F1 and F2 for some variable of F. Then of course we have that if F1 is equal to F2, then F′1 is equal to F′2. And the same goes for relations. Suppose we have two terms and a relation symbol R, and two atomic formulas R1 and R2 which are deduced from R by substitution of the terms F1, F2 for some variable of R. Then F1 = F2 implies that the atomic formulas R1 and R2 are equivalent — each one of them implies the other. Okay, so then there are the rules which define finitary conjunction. First, any formula phi implies true — so anything implies truth. And if we have formulas phi and phi1, ..., phik, then phi implies the conjunction of the phi_i's if and only if phi implies each of the phi_i's. And there are the corresponding rules for arbitrary disjunction. First, falsity implies anything. And secondly, for any phi and any family of phi_i's, possibly infinite, the disjunction of the phi_i's implies phi if and only if each phi_i implies phi. Okay, then there is the so-called distributivity rule, which is the fact that if we apply conjunction with some phi to a formula obtained by disjunction, we can commute the two operations: phi conjoined with the disjunction of the phi_i's is the disjunction of the conjunctions of phi with the phi_i's. Then there is the rule defining existential quantification: if we have phi in a context x, y and a formula psi in the context x, then "(there exists y) phi implies psi" — considered as a sequent in the context x — holds if and only if phi implies psi in the context x, y. So this is the definition of the meaning of the symbol of existential quantification. And lastly, there is the so-called Frobenius rule, which is the fact that existential quantification commutes with conjunction. Okay, so it is written there.
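Written out compactly in sequent form, the list of rules just given reads as follows (my summary; the substitution rules for terms and relations are stated in the prose above and omitted here for brevity):

```latex
% Cut:
\frac{\phi_1 \vdash_{\vec x} \phi_2 \qquad \phi_2 \vdash_{\vec x} \phi_3}
     {\phi_1 \vdash_{\vec x} \phi_3}

% Equality:
\top \vdash F = F, \qquad
F_1 = F_2 \vdash F_2 = F_1, \qquad
(F_1 = F_2) \wedge (F_2 = F_3) \vdash F_1 = F_3

% Finitary conjunction:
\phi \vdash \top, \qquad
\phi \vdash \textstyle\bigwedge_i \phi_i
  \iff \phi \vdash \phi_i \ \text{for all } i

% Arbitrary disjunction:
\bot \vdash \phi, \qquad
\textstyle\bigvee_i \phi_i \vdash \phi
  \iff \phi_i \vdash \phi \ \text{for all } i

% Distributivity:
\phi \wedge \textstyle\bigvee_i \phi_i
  \;\dashv\vdash\; \textstyle\bigvee_i (\phi \wedge \phi_i)

% Existential quantification, and the Frobenius rule:
(\exists y)\phi \vdash_{\vec x} \psi \iff \phi \vdash_{\vec x, y} \psi,
\qquad
(\exists y)(\phi \wedge \psi) \dashv\vdash \phi \wedge (\exists y)\psi
\quad (y \ \text{not free in } \phi)
```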
And so in one direction it has to be introduced as a new rule, and in the other direction it is a consequence of the previous rules. By now we have the meaning of "provable": something is provable in the theory when it can be deduced from the axioms by applying this list of rules. And what is remarkable here is that this list of rules is finite and completely explicit, so we know in a completely explicit way what it means to be provable. And now the syntactic category C_T is well defined. I remind you that in the definition of the category, the notion of provability does not appear in the definition of objects, which only depend on the signature, but it appears in the definition of morphisms: these are formulas which are provably functional in the theory T. Okay, and of course, we still have to verify that this is indeed a category, that this category is geometric, and that we can define a model in this category which is universal — which means which represents all models in all geometric categories. This we shall do tomorrow, and we shall continue with the study of classifying toposes. So for today, I'll stop here.
The purpose of these lectures will be to present the theory of classifying toposes of geometric theories. This theory was developed in the 1970s by Lawvere, Makkai, Reyes, Joyal and other category theorists, systematising some constructions of Grothendieck and his student Monique Hakim, but it still deserves to be much better known than it actually is. The last part of the lectures will present new developments due to Olivia Caramello which, based on her principle of "toposes as bridges", make the theory of classifying toposes more applicable to concrete mathematical situations: in particular, the equivalence between geometric provability and computing on Grothendieck topologies, and general criteria for a theory to be of presheaf type.
10.5446/55535 (DOI)
So it's my pleasure to reintroduce Laurent Lafforgue, who will continue his course on classifying toposes of geometric theories. Please, Laurent. — Okay, thank you very much. I just want to remind the participants that yesterday I introduced the statement of the existence theorem of classifying toposes: how to associate to any first-order geometric theory a topos which represents the functor of models of this theory. The first part of the lecture was devoted to the meaning of the theorem, of everything which appears in the theorem. And the second part, which we had begun, was devoted to sketching the proof of this theorem, which means constructing from a given theory T a category C_T, the so-called syntactic category, endowed with a topology J_T, the so-called syntactic topology, so that the associated topos answers the question raised in the theorem. So we had begun the second part, saying first that we wanted to construct such syntactic categories together with the topology J_T, and that C_T should be endowed with a canonical model M_T, which should be universal with respect to the construction of all models in all geometric categories. Okay, so then we had introduced the notion of geometric category, which is a type of category where geometric formulas in any signature can always be interpreted. And there is not only a notion of geometric category, but also a notion of geometric functor: a geometric functor is a functor between geometric categories which respects the interpretations of geometric formulas in sigma-structures. Okay, and as a statement, the characterizing property of the syntactic category endowed with its canonical model is the fact that the syntactic category represents the functor of models on geometric categories, related by geometric functors. And then we had proposed the construction of the syntactic category.
So by definition, the objects of this category are geometric formulas in some variables, and these formulas are considered up to substitution of variables. Then there is a definition of morphisms: morphisms are geometric formulas which are provably functional. In fact, yesterday I had forgotten to say that these provably functional geometric formulas should be considered up to T-provable equivalence — some participants made me remark that I had forgotten to say that, so here it is added in the last line of this slide. Okay, so of course, this notion of provable equivalence refers to what it means to be provable in the theory T, just as the notion of provably functional formulas. So this heavily depends on the theory T you are considering, not only on the signature. And then there is a formula for the composite of two morphisms. And of course, it has to be verified that this definition defines a category and that this category is a geometric category. Okay, as this definition refers to the notion of T-provability, we needed to make precise what it means to be provable. To be provable in a theory means that it can be deduced from the axioms of the theory using the inference rules of geometric logic, and we had given a full list of the inference rules which can be used. And of course, this list is just the list of deduction rules which we use all the time when we do mathematics, even without thinking about it. Okay, and by now, as we have this notion of provability, there is an induced notion, which is a notion about the relationship between different theories. So let's consider two theories, T and T′, with the same signature sigma — so the same language — but considered with different axioms. And then there are some elements of vocabulary we introduce.
So first, we say that the theory T′ is a quotient of the theory T if any geometric sequent in the signature — a sequent relating two formulas, phi and psi — which is provable in T is also provable in T′. Excuse me, I first stated it the opposite way, that was a mistake: T′ is a quotient of T when anything which is provable in T is also provable in T′. So this means that in T′ there are more provable properties than in T. For instance, if you think about the theory of rings and the theory of commutative rings, of course the theory of commutative rings has more provable properties than the theory of rings, so it is a quotient. And we say that two theories are equivalent if anything which is provable in one of these theories is also provable in the other — so each one is a quotient of the other. Okay, then there are obvious remarks to be made. If two theories are equivalent, of course, the associated syntactic categories are the same. And if T′ is a quotient of T — which means there are more sequents provable in T′ than in T — then C_T is a subcategory of C_{T′}, and they have the same objects, because the objects only depend on the signature, as we have already remarked. Okay, so this will be important for later. By now, what we immediately remark is that if two theories are equivalent, the associated syntactic categories are exactly the same. Okay, so by now it is important to consider sub-objects in syntactic categories. The notion of sub-object makes sense in any category, so in particular we may wonder what is a sub-object of an object in the syntactic category. So let's consider a geometric theory T, and its associated syntactic category C_T.
And let's consider an object of C_T, which by definition is a geometric formula phi of x in some variables, in some context x — considered, of course, up to substitution of variables. Then the statement of the proposition tells us that the sub-objects of such a formula are exactly the formulas phi1 in the same context, in the same family of variables, such that the sequent "phi1 implies phi" is provable in the theory T. Okay, so we see that, in other words, in this category the notion of sub-object exactly corresponds to the notion of provability between geometric formulas written in the same variables. So this is a categorical translation of the notion of provability. And by now, if we have two sub-objects phi1 and phi2 of phi, then the inclusion relation of one of these sub-objects in the other corresponds exactly to the relation of provability between the formulas. Okay, the second part is of course a consequence of the first part, because a sub-object A is included in some other sub-object B if and only if A is a sub-object of B. Okay, so it is very important to have such a translation between the categorical notion of sub-object and the logical notion of provability. In fact, this whole theory of syntactic categories is a way to make logic categorical. And when I say logic, once again, you should not understand this word as a particular part of mathematics, but rather as the way to present any mathematical theory in the linguistic way, which we always do when we do mathematics. Okay, so then we have to make precise what is the canonical model M_T in such a syntactic category — how is it defined? So once again, let's consider such a geometric theory in some signature sigma, together with the associated syntactic category, and suppose we have verified that C_T is a geometric category. This, of course, requires a proof; we don't have enough time here to give the proof, but it can be checked.
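In symbols, the correspondence between sub-objects and provability just stated reads (my notation, writing {x . phi} for the object of C_T given by the formula phi in context x):

```latex
% sub-objects of an object of the syntactic category:
\mathrm{Sub}_{\mathcal C_T}\big(\{\vec x \,.\, \phi\}\big)
  \;\cong\;
  \big\{\, \{\vec x \,.\, \phi_1\} \;:\;
     \phi_1 \vdash_{\vec x} \phi \ \text{provable in } T \,\big\}
  \big/ \ T\text{-provable equivalence}

% and inclusion of sub-objects is exactly provability:
\{\vec x \,.\, \phi_1\} \leq \{\vec x \,.\, \phi_2\}
  \quad\Longleftrightarrow\quad
  \phi_1 \vdash_{\vec x} \phi_2 \ \text{provable in } T.
```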
And if you don't know the proof, it is a very good exercise to check that it is really a geometric category — in particular, that it has arbitrary finite limits, and so on. Okay, so as this category is geometric, it has a meaning to talk about models of the theory in such a category. And then we are going to introduce a particular model, which we shall call the canonical model. So a model consists in associating an object to any sort A, a morphism to any function symbol f, and a sub-object to any relation symbol R. Okay, so let's do it. First, for any sort A, we decide that the associated object is just the formula "true" in any variable x^A associated to the sort A; of course, it does not depend on the choice of variable, because objects are formulas considered up to substitution of variables. Then, if we consider a function symbol f going from a list of sorts A1, ..., An to a sort B, then by definition the associated morphism in the model we are constructing is given by the formula which is written there: the formula just says that the variable x^B in the codomain is equal to f of the variables x1, ..., xn in the domain. Okay, so this is just the formula we are used to when we want to define a function. But here, by definition, such a formula is a morphism; we just have to check that this formula is provably functional, which of course is completely obvious. And lastly, for any relation symbol R in the context of a family of sorts A1, ..., An, we decide that the associated sub-object of "true" in the variables x1, ..., xn is just the formula R in x1, ..., xn — which is of course a sub-formula of "true" in x1, ..., xn. And here we are using the fact that in the syntactic category, the formula "true" in the variables x1, ..., xn is the product of the objects represented by the formulas "true" in x1, "true" in x2, ..., "true" in xn. Okay, so this of course defines a sigma-structure.
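Summarizing the definition just given, the canonical sigma-structure M_T can be written as follows (my notation for objects of C_T):

```latex
% canonical sigma-structure M_T in the syntactic category C_T:
M_T(A) \;=\; \{\, x^A .\, \top \,\}
\qquad\text{(sorts)}

M_T\big(f : A_1 \cdots A_n \to B\big)
  \;=\; \big[\, x^B = f(x_1, \dots, x_n) \,\big]
  \;:\; \{x_1, \dots, x_n .\, \top\} \longrightarrow \{x^B .\, \top\}
\qquad\text{(function symbols)}

M_T\big(R \rightarrowtail A_1 \cdots A_n\big)
  \;=\; \{\, x_1, \dots, x_n .\, R(x_1, \dots, x_n) \,\}
  \;\leq\; \{\, x_1, \dots, x_n .\, \top \,\}
\qquad\text{(relation symbols)}
```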
And here there is something to be verified, which is the fact that this is a model of the theory T. In order to do that, we have to consider the interpretations of geometric formulas in this model. So let's consider such a geometric formula phi in some variables x. By now we have a lemma: the interpretation in the model M_T of such a formula phi of x is just the sub-object defined by this formula. Just remember, we had said that the sub-objects of formulas are formulas in the same variables which provably imply the ambient formula. So here the ambient formula, the context, is the formula "true" in the variables x1, ..., xn, and it contains as a sub-object the formula phi of x. And so the statement of the lemma is that this formula phi of x, considered as a sub-object of the formula "true" in x, is exactly the interpretation of the formula phi of x in our canonical sigma-structure M_T. And if we combine this lemma with the previous proposition on the relationship between provability and inclusion of sub-objects, we get as a corollary that a geometric sequent relating two geometric formulas phi and psi is provable in the theory T if and only if, considered as sub-objects of the object "true" in x, the first one is included in the second one. And this, by definition of a model, means that the sigma-structure M_T verifies the sequent "phi implies psi". So you see that here we have an equivalence: we are saying that a sequent is provable in T if and only if it is verified by the sigma-structure M_T. An equivalence, of course, means we have an implication in two directions. In the first direction, we have that if a sequent is provable — in particular, if it is an axiom — then it is verified by the sigma-structure M_T. This means that M_T is a model. But here we see that it also works in the other direction.
If the canonical model M_T verifies a sequent, then this sequent is provable in the theory T. Okay. And by now we have a theorem which tells us exactly that the category we have constructed is an answer to the question we addressed. So let's consider the syntactic category just constructed of a geometric theory T, together with its canonical model M_T. Then, as for any model, there is for any geometric category C a functor from the category of geometric functors from C_T to C to the category of T-models in C, and this functor associates to any functor F from C_T to C the transform F(M_T) of the model M_T by the functor F. Okay. And the statement of the theorem is that for any such geometric category C, this functor is an equivalence: the category of geometric functors from C_T to C is equivalent, through this functor, to the category of models of T in C. And not only is it an equivalence, but we can also construct the reverse equivalence. The reverse equivalence has to associate to any model M of the theory T in C a geometric functor F_M from the syntactic category C_T to C. So how is this reverse equivalence defined? We have to associate to any object of C_T an object of C, and to any morphism of C_T a morphism of C. What is an object of C_T? By definition, it is a geometric formula up to substitution of variables, and what we do is just consider its interpretation in the model M: this, by definition, will be the image of this formula, considered as an object of C_T, by the functor F_M. So the image of such a formula by the functor F_M is just defined as the interpretation of the formula phi of x in the model M. And by now, let's consider a morphism of the syntactic category: it is a provably functional formula theta in two families of variables x and y.
And it is such a provably functional formula up to T-provable equivalence. So what you have to do is obvious: you consider the interpretation of this formula. The interpretation of this formula is a sub-object of the product of the interpretations of the formulas phi of x and psi of y; but because the formula theta is provably functional, you get that this sub-object of the product is the graph of a morphism. And so you define the image of the morphism theta by the functor F_M to be the unique morphism whose graph is the interpretation of theta. Everything has been defined so that all of this makes sense. Okay, and so the theorem tells you that these are two equivalences which are inverse one to the other. Of course it has to be verified, but the verification is in fact quite easy; it is more intricate to verify that the category C_T is geometric — once you know this, everything goes through very easily. Okay. By now, we have already constructed the syntactic category together with its canonical model, and we know that the syntactic category represents the functor of models on geometric categories. But now we remember that what we want, in fact, is not a geometric category: we want a topos, which is a much more geometric object. And in order to get a topos, we need to define a topology on the syntactic category. We need to define when a sieve on an object of the syntactic category will be considered as covering. And here is the definition we propose. We decide that a sieve on an object psi of y is covering if it contains a family of morphisms, you see, from some domains phi_i to the codomain psi, these morphisms being defined by provably functional formulas theta_i. And so we decide that such a sieve has to contain such a family of morphisms which is globally epimorphic. So "globally epimorphic" is once again a categorical notion.
But because images and unions of sub-objects are well defined in the geometric category C_T, the property of being globally epimorphic just means that the union of the images of these morphisms is equal to everything — equal to psi. And so this is an inclusion property, and an inclusion property, I remind you, is equivalent to a property of T-provability. In fact, it means exactly that the following sequent is provable: psi implies the union over the indices i of the images, where the images are just given by the existential quantifiers on the variables x_i. So this sequent has to be provable in the theory. This is exactly the logical translation — the translation into the language of provability — of the categorical property for a family of morphisms to be globally epimorphic. And so we decide to define in this way the notion of covering sieve. And of course, it can be checked that this is indeed a Grothendieck topology, which means it verifies the three Grothendieck axioms of maximality, stability and transitivity. Maximality is obvious. Stability comes from the fact that both arbitrary unions and images — which means existential functors — commute with base change: for unions, it is the so-called distributivity rule, and for the existential quantifiers, it is the so-called Frobenius rule. Okay, so these are parts of the usual inference rules, the deduction rules of logic. A remark we can make here, an important remark, is the fact that this topology is defined by the categorical structure of the syntactic category. You see, a family of morphisms is covering when it is globally epimorphic, and this property is purely categorical. So it means in particular that if you have two syntactic categories which are equivalent, then they carry the same topology, because the topology is deduced from the categorical structure in that case. Okay, so let's just see where we are.
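The syntactic topology just defined can be stated in one line (my notation, matching the sequent described above):

```latex
% syntactic topology J_T: a sieve on {y . psi} is covering iff it contains
% a family of morphisms [theta_i] : {x_i . phi_i} -> {y . psi} such that
\psi \;\vdash_{\vec y}\; \bigvee_{i} (\exists \vec x_i)\, \theta_i
\quad\text{is provable in } T
% i.e. the union of the images of the theta_i is the whole of psi.
```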
So we have constructed the syntactic category and we have proved that it is universal with respect to models of the theory T in geometric categories. But by now we want to move to a universality property with respect to toposes and morphisms of toposes. This means we have to move from geometric functors between geometric categories to morphisms of toposes between toposes, and these two things are different. The relationship — the way to go from geometric functors to morphisms of toposes — is given to us by Diaconescu's equivalence, which was already stated yesterday afternoon by Olivia. So here it is. I repeat it in the case when the category we are considering has finite limits, which of course is the simpler case. In fact, yesterday Olivia stated Diaconescu's equivalence in general, but today we only need it in that particular case. So let's consider such a small category with finite limits, together with a topology, and let's consider the canonical functor from the category C to the associated topos: the canonical functor L is just the composite of the Yoneda embedding followed by the sheafification functor. And let's consider an arbitrary topos. Then here is the statement of Diaconescu's equivalence. First, for any morphism of toposes from this arbitrary topos E to the topos defined by the site (C, J), the composite of the canonical functor from C to the topos Sh(C, J) with the pullback component f* of the topos morphism verifies the two following properties. First, it is a flat functor from C to E. The notion of flat functor exists in a more general setting, without any hypothesis on C, as Olivia explained in her lectures yesterday afternoon.
But in the particular case where C has arbitrary finite limits, it just means that this functor from C to E respects finite limits. And the second property it verifies is the fact that it is J-continuous: it transforms any J-covering family of C into a family of morphisms of E which is globally epimorphic. Okay, so this is the first part of the statement. The second part of the statement is the fact that the functor we have just defined — from the category of morphisms of toposes from E to Sh(C, J) to the category of functors from C to E which are both flat and J-continuous — is an equivalence. Here, I remind you, flat just means that it respects finite limits, and J-continuous means that it transforms J-covering families into globally epimorphic families. Okay, so this is the general statement of Diaconescu's equivalence. And by now we have to apply it to syntactic categories, but for this we need a lemma. So let's consider a geometric theory, the associated syntactic category, and an arbitrary topos E, and let's consider a functor from C_T to E. Then the statement is that such a functor is geometric if and only if it is flat — which means it respects finite limits — and J_T-continuous, which means it transforms J_T-covering families of C_T into globally epimorphic families of E. I remind you that the syntactic topology of the syntactic category was defined by deciding that a family of morphisms is covering when it is globally epimorphic; so here J_T-continuity just means that such a functor has to transform globally epimorphic families into globally epimorphic families. And this lemma, which is easy, tells us that a functor from the syntactic category to E is geometric if and only if it is flat and J_T-continuous; but according to Diaconescu's equivalence, this just means that such a functor defines a morphism of toposes in the reverse direction. Okay. And so we get the following corollary for any geometric theory T.
If we denote by E_T the quotient topos of the topos of presheaves on C_T by the topology J_T — that is, the topos of sheaves on the syntactic site — and we consider an arbitrary topos E, then we have a composite functor which goes from the category of morphisms of toposes from E to E_T to the category of flat J_T-continuous functors from C_T to E. This first functor is Diaconescu's equivalence, so it is an equivalence. But we have just said, by the previous lemma, that a flat J_T-continuous functor is the same thing as a geometric functor from C_T to E, and we have already proved that the category of geometric functors from C_T to E is equivalent to the category of models of T in E. So here we take the composite of two equivalences of categories, so it is an equivalence of categories, and we have proved that our topos is a classifying topos for the theory: for any topos E, to consider a morphism of toposes from E to E_T is the same thing as to consider a model of the theory T in E. In particular, if you take for E the topos of sets, you get that a point of the classifying topos E_T is the same thing as a set-theoretic model of T. Okay. Of course, as for any topos defined by a site, we have a canonical functor from the syntactic category to the topos E_T. But yesterday Olivia introduced in her lecture a notable property which can be verified by these canonical functors: the property of being fully faithful. This property is not always verified, but it is verified for some sites. So the definition is the following: you say that a topology J on a category C is subcanonical if the canonical functor from C to the associated topos, the topos of sheaves on (C, J), is fully faithful. Okay, so that is the definition. And in fact you can prove that this is equivalent to saying that the Yoneda embedding of C into Ĉ factorizes through the topos considered as a subcategory.
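In the notation above, the chain of equivalences establishing the classifying-topos property can be displayed as:

```latex
\[
  \mathbf{Geom}(\mathcal{E},\,\mathcal{E}_T)
  \;\simeq\;
  \mathbf{Flat}_{J_T}(C_T,\mathcal{E})
  \;\simeq\;
  \{\text{geometric functors } C_T \to \mathcal{E}\}
  \;\simeq\;
  T\text{-}\mathbf{mod}(\mathcal{E}),
\]
% so that, taking E = Set, the points of E_T correspond to the
% set-theoretic models of T.
```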
In other words, it means that any representable functor on C is a sheaf for the topology J. Okay, so this is a general definition which was given in Olivia's lectures yesterday. And then we have a lemma, which is the fact that the syntactic topology J_T on the syntactic category C_T is always subcanonical, whatever the theory T. And so this means that C_T embeds into E_T as a full subcategory. Of course, this fully faithful embedding also respects finite limits. This is always the case, because the Yoneda embedding respects arbitrary limits and the sheafification functor respects finite limits, so the composite — the canonical functor — always respects finite limits. And since this functor respects finite limits, in particular it respects subobjects, and it is fully faithful. As a corollary of this lemma and what we already proved, we get the following: a geometric sequent of the theory T, so an implication between two geometric formulas φ and ψ, is provable in the theory T if and only if it is verified by the universal model of T. The universal model of T is the image of the canonical model in C_T by the canonical functor L from C_T to E_T. So here you see we really have a wonderful relationship between syntax and semantics: any property is provable in the theory if and only if it is verified by the canonical model of the theory in its classifying topos. Okay, and I already mentioned as a side remark that if you apply that to so-called coherent theories — which correspond exactly to theories whose classifying topos is coherent in the sense of topos theory, as already introduced in SGA 4 — then what you get is that Gödel's completeness theorem is the same thing as Deligne's theorem that coherent toposes have enough points.
So this is a really wonderful fact, because it tells us that the most important theorem of logic is in fact the same thing as this purely geometric theorem of Deligne. Okay. Now an important remark, in the reverse direction, is the fact that if we start with an arbitrary topos, then there are infinitely many geometric theories whose classifying topos is equivalent to the given topos. Here I just sketch the idea of the proof of this proposition. You start from an arbitrary topos: by definition, a topos is a category which is equivalent to the category of sheaves on some site (C, J), where C is a small category — and we can even suppose that C has arbitrary finite limits — and J is a topology on C. Now, there is a notion of a flat J-continuous functor from C to an arbitrary topos, and this notion defines a theory: the theory of flat J-continuous functors. And you can check that this theory is in fact a geometric theory. In other words, you can say that this geometric theory is really the theory of points of the topos associated to (C, J): according to Diaconescu's equivalence, it is the theory of points of the associated topos. So it is a geometric theory, and then you prove that its classifying topos is equivalent to the topos of sheaves on (C, J) you started with. So here you see that any presentation of a topos in terms of a site gives rise to a presentation of the topos as the classifying topos of some geometric theory. So you already see that there is an incredible diversity of presentations of toposes by geometric theories.
From this there is an induced definition: you say that two geometric theories are Morita equivalent when they have equivalent classifying toposes. Maybe it could be suggested to call this semantic equivalence. In fact, there is also a notion of syntactic equivalence — it was mentioned yesterday when I answered a question: you say that two theories are syntactically equivalent when their syntactic categories are equivalent. But here there is another notion of equivalence, which is implied by the previous one but which is much richer. It is the notion of semantic equivalence, or Morita equivalence: it means that the two theories have the same associated topos. And it really means that two theories, which possibly are phrased in completely different languages, have the same mathematical content. Okay. And so it is the beginning of a theory of relations between the mathematical contents of different mathematical theories. This theory was introduced and developed by Olivia from her PhD thesis on; she has called it the theory of topos-theoretic bridges, and she already talked about it yesterday. Today I will present some concrete bridges and some of their consequences and applications. So here is the definition: two theories are called Morita equivalent if their classifying toposes are equivalent. Then, as a remark, we have the fact that a syntactic equivalence between two theories, which means an equivalence of their syntactic categories, implies an equivalence between the associated toposes. This is because the syntactic topologies on the syntactic categories are induced by the categorical structure. And here a very important remark is that the converse is not true.
Some theories are semantically equivalent, or Morita equivalent, without being syntactically equivalent. So the relation of semantic equivalence is much more subtle. Okay. Now, what I want to do in the third and last part of the lecture is to present some basic results which allow us to get some first applications of the theory of classifying toposes. The results I am going to present are from the PhD thesis of Olivia and her book "Theories, Sites, Toposes". Okay. So the general principle for getting these results will be to use bridges. A bridge, once again, is an equivalence between two toposes presented in two different ways. And the general principle of topos-theoretic bridges is to consider invariants of toposes, which means pieces of information — structures, or properties of whatever mathematical type — which can be constructed from toposes using the language of categories, and which are invariant under equivalences of categories. When we consider such an invariant, the general principle is to try to express this invariant in terms of presentations of the toposes under consideration. Here the first invariant we are going to consider is the category — in fact, the ordered set — of subtoposes of a given topos. To any topos it is possible to associate an ordered set consisting of its subtoposes. And then we may wonder how this invariant can be expressed in terms of different presentations of the toposes we consider. So first we need a definition; in fact, the first part of the definition was already introduced. What is an embedding of toposes? It is a morphism of toposes consisting of a pair of adjoint functors j*, j_*, such that the right adjoint component j_* is fully faithful. And since these functors are adjoint, this is equivalent to the following condition.
Namely, it is equivalent to requiring that the composite functor j* composed with j_* is identified with the identity functor of E′. So you see, it really corresponds to E′ being a subcategory fully embedded in E, and the pullback component from E to E′ has to act on objects of E′ without changing them: in some sense, they are fixed points of this functor. So this is the definition of an embedding. Then a subtopos is an equivalence class of embeddings: you decide that two embeddings of toposes E1 and E2 into an ambient topos are equivalent if they can be related by an equivalence of categories which transforms the first embedding into the second embedding up to isomorphism of functors. Okay, so this is the notion of subtopos. Then there is a first result, which in fact is already in SGA 4. It is the fact that if we consider a topos E presented as the topos of sheaves on some site (C, J), then to consider a subtopos of this topos is the same thing as to consider another topology J′ on the same category C which is bigger than the given topology J. Of course, in one direction it is obvious: if you have on C a topology J′ which is bigger than J, it defines a subtopos Sh(C, J′) of Ĉ, and the embedding of Sh(C, J′) into Ĉ factorizes through the subtopos Sh(C, J), because J′ contains J. So this means that any topology J′ bigger than J induces a subtopos of our topos E = Sh(C, J). But the proposition is that this map is one-to-one: to consider a subtopos is the same thing as to consider such a topology. And since we can take for C a small category, this implies as a consequence that the subtoposes of a given topos E, together with the inclusion relation of subtoposes, form a set ordered by inclusion. Okay, so this is a corollary of the proposition.
And now we can wonder about the translation of this invariant — the invariant consisting of the ordered set of subtoposes — in terms of theories. So now we consider a geometric theory T of signature Σ, and we suppose that we are interested in the subtoposes of the classifying topos of T, the subtoposes of E_T. Here is the statement. First, start from a theory T′ which is a quotient of T: this means it has the same signature, the same language, and it has more provable sequents — there are more axioms in T′ than in T. Then the first part of the statement is that in that case E_T′, the classifying topos of T′, identifies with a subtopos of the classifying topos of T. And the second part of the statement is that this map, which goes from quotient theories T′ of T — considered of course up to equivalence — to the ordered set of subtoposes of E_T, is one-to-one. So to consider a quotient theory of T is the same thing as to consider a subtopos of E_T. So this means that this invariant, consisting of the ordered set of subtoposes of a given topos, has an extremely nice expression both on the side of topologies and on the side of theories. Okay, so maybe we can sketch, or at least give an idea of, the proof. On one side we have the quotient theories, and on the other side we have the topologies, so what we need to do is to define a map in one direction and then in the other direction. So start from a quotient T′ of T and consider an axiom of T′: it is a sequent φ ⊢ ψ. Of course, this sequent defines a monomorphism in the category C_T: the monomorphism corresponding to the subobject φ ∧ ψ of the object φ. Of course φ ∧ ψ is a subobject of φ in the syntactic category.
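The two faces of this bridge can be summarized as a pair of order-isomorphisms (my condensed notation, matching the two statements just given):

```latex
\[
  \{\text{Grothendieck topologies } J' \supseteq J \text{ on } C\}
  \;\xrightarrow{\ \sim\ }\;
  \{\text{subtoposes of } \mathbf{Sh}(C,J)\},
\]
\[
  \{\text{quotient theories } T' \text{ of } T\}\big/\!\simeq
  \;\xrightarrow{\ \sim\ }\;
  \{\text{subtoposes of } \mathcal{E}_T\}.
\]
```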
Then you decide that this monomorphism has to be a covering for the topology J associated to T′, and you define J as the smallest topology on C_T which contains the syntactic topology J_T and these coverings. Okay, so it is the topology generated by J_T and by this family of coverings associated with all the axioms of T′. In the other direction, when we have a topology J on C_T which contains J_T, we want to associate to it a quotient T′ of T. In order to do that, what you do is to consider an arbitrary J-covering family of morphisms of C_T. Such a family of morphisms is, of course, a family of T-provably functional formulas. Okay. And you can consider the images of these morphisms — this corresponds to taking an existential quantification — and then we consider the union of the images, which corresponds to the disjunction sign we have written there. In this way we have just introduced a sequent φ ⊢ ψ. And what we really want is for this family of morphisms to become a covering, and we want to translate that into the language of provability. So we decide that any sequent of this form, obtained from a covering, has to be an axiom of T′, and we define T′ as the quotient of T defined by this family of axioms. And the theorem is that these two maps are inverse one to the other. So you see that it is constructive in both directions. Okay. Before we make a break, I want to state the following consequence, which is the equivalence between the notion of provability and the notion of Grothendieck topology. Here we combine the two translations of this invariant, which has been computed on the one hand in terms of topologies and on the other hand in terms of theories. So consider a geometric theory T and suppose its classifying topos E_T is presented as a topos of sheaves on some site.
This site may be the syntactic site, but it may also be any site, any representation which you know. Then there is a one-to-one correspondence between, on the one hand, the theories T′ which are quotients of T and, on the other hand, the topologies J′ on C which contain J. So here, as Olivia says, we have a bridge: the invariant is the one consisting of the subtoposes of the considered topos, and we are considering two representations of this topos at the same time — on the one hand in terms of a theory T, and on the other hand in terms of a site (C, J) — and we express the same invariant on the two sides. In this way we get a correspondence. And this corollary has very striking consequences. The first one is written there. Of course, when you have a theory given by a family of axioms, the first thing you want to know about this theory is whether it is contradictory or not — whether it is contradictory or consistent. In general, it is a very subtle and difficult question to answer. But here, the consequence of this corollary is that when the classifying topos E_T is presented from a site (C, J), and J′ is the topology corresponding to T′, then T′ is contradictory if and only if the topology J′ is maximal. And maximal means that the empty sieve is covering for any object. So this is a geometric translation of the property of being contradictory. And this means that if in the category C there is at least one object which is not covered by the empty sieve, then the theory T′ is not contradictory. And here is the last slide before the break, which is also, I think, very interesting. It is just an obvious consequence of the previous result. It is the following: consider an arbitrary geometric theory of signature Σ, and then consider an arbitrary geometric sequent.
Of course, you want to know — you are a mathematician, so you want to know — whether this geometric sequent is provable in the theory. Is it a consequence of the axioms or not? In mathematics, usually you want to prove results, and a result is always an implication which has to be a consequence of the axioms of the theory. So you want to prove it: you are a mathematician, this is exactly what you want to do. But then, formally, you can do the following. You just introduce T′, the quotient theory of T which is defined by adding this axiom. And so the question is: is T′ equivalent to T or not? T′ equivalent to T just means that this sequent is a consequence of the axioms of T — it is provable in T. On the other side, let's consider the classifying topos of T, so E_T, written, we suppose, as the topos of sheaves on some site, and let's consider the topology J′ on C which is defined by E_T′. In fact, we know how this topology is defined: we just consider the sequent φ ⊢ ψ, we consider the associated monomorphism from φ ∧ ψ to φ, and J′ is just the topology generated by J and by this monomorphism considered as a covering. So this means J′ is constructed from J by generation by one covering. And now we have the following equivalences: our sequent is provable in the theory if and only if the quotient theory T′ of T is in fact equivalent to T, and this is equivalent to J′ = J. In other words: is some topology, generated in some way, equal to J, or is it bigger? This is the question. So this is really wonderful, because it means that all the inference rules — you remember, there were maybe ten or twelve inference rules which were the deduction rules of geometric logic — are replaced by the axioms of Grothendieck topologies.
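As a toy, hedged illustration of this reduction of provability to topology-generation (my own sketch, not the lecture's construction): on a finite poset, viewed as a category with at most one arrow between any two objects, a sieve on an object u is a down-closed subset of the elements below u. We can saturate some generating coverings under the three axioms of a Grothendieck topology — maximality, stability, transitivity — and then a question "is this sieve covering?" plays the role of the question "is J′ equal to J?". All names below (`saturate`, the chain `0 ≤ a ≤ b ≤ top`, etc.) are illustrative.

```python
# Saturating covering sieves on a finite chain of "opens" under the three
# Grothendieck-topology axioms; membership tests then mimic provability tests.
from collections import defaultdict
from itertools import combinations

elements = ["0", "a", "b", "top"]          # a chain 0 <= a <= b <= top
rank = {e: i for i, e in enumerate(elements)}

def leq(v, u):
    return rank[v] <= rank[u]

def sieve(u, gens):
    """The sieve on u generated by gens: the down-closure inside {v : v <= u}."""
    return frozenset(v for v in elements
                     if leq(v, u) and any(leq(v, g) for g in gens))

def maximal(u):
    return sieve(u, [u])

def pullback(S, v):
    """Pullback of the sieve S along v: its elements lying below v."""
    return frozenset(w for w in S if leq(w, v))

def all_sieves(u):
    """All down-closed subsets of {v : v <= u} (candidate sieves on u)."""
    below = [v for v in elements if leq(v, u)]
    out = set()
    for r in range(len(below) + 1):
        for sub in combinations(below, r):
            s = frozenset(sub)
            if all(w in s for v in s for w in elements if leq(w, v)):
                out.add(s)
    return out

def saturate(gen):
    """Smallest topology containing gen: J[u] is the set of covering sieves on u."""
    J = defaultdict(set)
    for u in elements:
        J[u].add(maximal(u))                      # maximality axiom
        J[u] |= gen.get(u, set())
    changed = True
    while changed:
        changed = False
        for u in elements:                        # stability axiom
            for S in list(J[u]):
                for v in elements:
                    if leq(v, u) and pullback(S, v) not in J[v]:
                        J[v].add(pullback(S, v)); changed = True
        for u in elements:                        # transitivity axiom
            for S in list(J[u]):
                for R in all_sieves(u):
                    if R not in J[u] and all(pullback(R, v) in J[v] for v in S):
                        J[u].add(R); changed = True
    return J

# One added covering ("the sieve generated by b covers top"), like one axiom:
J_base = saturate({"top": {sieve("top", ["b"])}})
# Adding also "a covers b" makes "a covers top" derivable by transitivity:
J_more = saturate({"top": {sieve("top", ["b"])}, "b": {sieve("b", ["a"])}})
```

The point of the demo is exactly the lecture's slogan: asking whether a sequent becomes provable is asking whether the generated topology already contains a given sieve.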
And these axioms are just three: the maximality axiom, which is trivial, and then the stability and transitivity axioms. So in fact the axioms of Grothendieck topologies appear here as completely equivalent to the deduction rules of geometric logic, and any problem of provability in a geometric theory can be translated into the problem of knowing whether some family of morphisms in a category belongs to the topology generated by a family of coverings. Okay, so of course it doesn't make the problem trivial, but it is a translation of one part of mathematics: it is really, in some sense, an embedding of logic into topology. Okay, so here I think something which really has to be done is to try to implement this in computer systems, and I really think there is an important subject to be explored here. Okay, so I stop there for the first part; we make a break. Okay, thank you.
The purpose of these lectures will be to present the theory of classifying toposes of geometric theories. This theory was developed in the 1970's by Lawvere, Makkai, Reyes, Joyal and other category theorists, systematising some constructions of Grothendieck and his student Monique Hakim, but it still deserves to be much better known than it actually is. The last part of the lectures will present new developments due to Olivia Caramello which, based on her principle of "toposes as bridges", make the theory of classifying toposes more applicable to concrete mathematical situations: in particular, the equivalence between geometric provability and computing on Grothendieck topologies, and general criteria for a theory to be of presheaf type.
10.5446/55536 (DOI)
Luca has given several different contributions to sheaf theory. Today we will talk about sheaves on T-topologies. I will start with a brief introduction, and then I will recall some basic notions — which Olivia also recalled in her lecture — about sheaves on topological spaces. Then I will pass to sheaves on T-topologies. The idea is that in this setting we can consider more objects, and we can study them and work with them. After that I will talk about a couple of applications in algebraic analysis. So we will see how, thanks to these new objects, by enlarging a little the T-topology, we can construct some analytical objects which are very important. And then I will also talk about an application to differential operators: I will explain how, by enlarging a little the category of analytical objects, we can distinguish different kinds of differential operators which we cannot distinguish with classical sheaf theory. Okay. As I said, sheaf theory is used in several mathematical contexts and has applications in various fields of mathematics — in contexts such as semialgebraic geometry, subanalytic geometry and o-minimal geometry. Each of these theories has been very useful in various fields of mathematics, but each of them had some elements which were missing in the other ones. For example, in subanalytic sheaf theory we did not have the notion of fibers, of stalks; in o-minimal geometry we did not have certain operations, nor a way to construct acyclic objects. And the idea was to find something to generalize these various constructions. The remedy, which was introduced by Kashiwara and Schapira — but I think it was maybe known by other mathematicians as well — was to consider fewer opens and fewer coverings, so a very easy notion of site, and to extend the classical machinery of sheaves to this context.
And in this way we can give a first unified description of the various constructions — the subanalytic, semialgebraic and o-minimal ones — using the notion of T-topology. So I recall briefly what a sheaf is. Here I am interested in sheaves of vector spaces, so we consider a topological space and a field. A presheaf is a correspondence from the open subsets to the category of vector spaces: to each open set we associate a vector space, and to each inclusion of open subsets we associate a morphism — the restriction morphism — compatibly with the inclusions of open subsets. This means that it is a contravariant functor from the category of open subsets of the topological space, where the arrows are the inclusions, to the category of vector spaces. Given such a contravariant functor, we can define a sheaf, which is a presheaf satisfying a gluing condition, represented by an exact sequence. This means that if we have a section on an open subset which is zero on a covering, then this section is zero; and if we have a family of sections on a covering of an open subset which glue on the intersections, then we can find a global section extending all of them. A classical example is continuous functions: we all know that if a continuous function is zero on a covering then it is zero, and we have the gluing condition on coverings. So this correspondence defines a basic example of a sheaf on a topological space. But there are objects which are interesting and do not define sheaves in the usual topology. One example is bounded continuous functions: on the real line, for example, we can find infinite coverings and bounded functions which glue to a function which is not bounded. For example, we can consider the covering of R given by the open subsets (−n, n).
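In standard notation, the gluing condition just described is the exactness, for every covering U = ⋃ᵢ Uᵢ in the topology under consideration, of the following sequence:

```latex
\[
  0 \longrightarrow F(U)
    \longrightarrow \prod_{i} F(U_i)
    \;\rightrightarrows\;
    \prod_{i,j} F(U_i \cap U_j),
\]
% exactness at F(U) is the separation condition (a section vanishing on a
% covering vanishes), and the equalizer condition at the first product says
% that local sections agreeing on the intersections glue to a global one.
```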
Of course, every continuous function is bounded on (−n, n), but not every continuous function is bounded on the real line — for example, x squared. So this correspondence is not a sheaf if we consider the usual topology; we have to change something. On a topological space we have the notion of stalks, which is given by an inductive limit of the sections: the stalk at a point is given by the inductive limit of the sections on the neighborhoods of the point, and the elements of the stalk are equivalence classes. So an element of the stalk is represented by a section on a neighborhood of the point, and two sections are equivalent at the level of stalks if they coincide on a neighborhood of the point x. Stalks are important because, thanks to them, we can check if two sheaves are isomorphic, we can check if sequences are exact — we do everything on stalks. Now, the definition of a topological space, if we look at it, only depends on the notion of open subset and the notion of covering, and we can slightly generalize this notion by choosing only a subfamily T of open subsets and, for each element of T, a subfamily of coverings satisfying some suitable hypotheses. In this way we can define a site — a very easy example of a site, because we just take fewer open subsets and fewer coverings; the arrows are always the inclusions, so we have at most one arrow between two objects of T. In this case we can define presheaves as contravariant functors from this new family T to the category of vector spaces, and we say that a presheaf is a sheaf if it satisfies the gluing condition only on this restricted family of open subsets and coverings. So we still have the exact sequence of the gluing condition, but it is restricted to a smaller family of open subsets and a smaller family of coverings. Why do we choose fewer open subsets and fewer coverings? Because then we can consider more objects.
For example, we have seen the example of bounded continuous functions. Now take a topological space, consider all the open subsets, but consider only finite coverings — or better, coverings which admit a finite refinement — in order to obtain the site. And consider the correspondence we have seen before: to U we associate the bounded continuous functions on U. In this case, if we consider finite coverings, we can glue the bounded functions to a continuous function, and we can also take the max of the bounds, so we obtain a bounded function which glues the continuous functions on the finite covering. So if we consider fewer coverings, we have one more object which becomes a sheaf: in this case, bounded continuous functions become a sheaf. With this idea in mind, we try to extend the notion of sheaves to subanalytic, o-minimal, semialgebraic and other kinds of sites. So we consider a topological space, and on this space a family of open subsets which is stable under finite unions and intersections, such that the difference of two elements has a finite number of connected components, and such that it is a basis for the topology of the topological space X. Starting from this, we define the site X_T, where the objects are just the elements of this family of open subsets, and the coverings are the topological coverings admitting a finite refinement — so I write "finite coverings" in this sense. It is important to have finite coverings. Examples of such a family T are, for example: the family of semilinear open subsets of R^n, the open subsets defined by linear equations and inequalities; the family of semialgebraic open subsets of R^n, the subsets which are locally defined by algebraic inequalities; and the family of relatively compact subanalytic open subsets of a real analytic manifold, which gives the subanalytic site X_sa — this is the example I will use later for the applications.
And more generally, if we have an o-minimal structure, the family of open definable subsets of this o-minimal structure. All these families are stable under union and intersection and have this nice property of a finite number of connected components. So if we fix such a family T with these properties, there is a nice category inside the category of classical sheaves, which is the category of coherent objects: the category of sheaves such that there is an epimorphism from a finite sum of constant sheaves on objects of T, and such that for each morphism from a finite sum of such constant sheaves, the kernel also satisfies this property. So these are sheaves which have a finite resolution by these constant objects k_U. This is a full abelian subcategory of the category of sheaves on X. And if we consider the natural functor of sites, which is induced by the inclusion of the family T into the family of open subsets of X, we have functors of direct and inverse image between the category of classical topological sheaves and the category of sheaves on the site X_T — direct and inverse images, adjoint to each other. And the direct image is also a fully faithful, exact inclusion of the category of coherent sheaves into the category of sheaves on the site X_T. So we can really see the category of coherent sheaves as a subcategory of the category of sheaves on X_T. And we have an additional description of this category of sheaves on X_T, because it is equivalent to the category of ind-coherent sheaves on T. This means that an object of the category of sheaves on X_T can be seen as an ind-object of coherent sheaves, a filtrant inductive limit of a functor with values in coherent sheaves. This category is also equivalent to the category of k-additive left exact contravariant functors from coherent sheaves to the category of vector spaces, so a sheaf on X_T can be seen as a contravariant functor from Coh to Mod(k). Thank you.
We can also construct sheaves on X_T in a very easy way. If we have a presheaf — a contravariant functor from the family T of open subsets to the category of vector spaces — it is easy to check whether it is a sheaf or not. We assume that the empty set is sent to 0, and then we only have to check the gluing condition on pairs of objects of the family T: we have to check that the sequence of the gluing condition is exact only on pairs of open subsets. If this sequence is exact, we can say that F is a sheaf on X_T. For example, for bounded functions we just have to check that two bounded functions glue on a pair of open subsets, which is quite straightforward, because we take the max of the two bounds. Now, in the case of sheaves on T-topologies we do not have the usual notion of stalks, because we are in a Grothendieck topology: we cannot check, for example, whether two sheaves are isomorphic only by looking at fibers at the points of the topological space X. Even though this family T forms a basis — so the limit defining the stalk is effectively the limit over the neighborhoods of the point x — the stalks at the points of the topological space are not enough to check whether two sheaves are isomorphic or not. For example, if we consider continuous and bounded continuous functions, we cannot distinguish them by checking only on stalks, because on stalks any continuous function is also bounded: we take the limit over the neighborhoods of a point, and any neighborhood contains a smaller one on which the function is bounded. So if one considers only stalks at classical points, one loses information about the sheaf. We need more points, and we obtain these extra points by considering the spectral topology associated to this family: we consider the prime filters of elements of T — filters such that if the union of two objects belongs to the filter, then one of the two belongs to the filter.
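Before moving on: the pairwise gluing check for bounded functions described above can be made concrete. Below is a down-to-earth numerical sketch (the function names, the intervals and the sampling-based sup are my own illustrative choices, not part of the lecture): two bounded sections agreeing on the overlap glue to a section bounded by the max of the two bounds.

```python
# Pairwise gluing of bounded continuous sections on overlapping intervals,
# with the max of the two bounds as a bound for the glued section.

def sup_on(f, a, b, n=10_000):
    """Approximate sup of |f| on the open interval (a, b) by sampling."""
    step = (b - a) / (n + 1)
    return max(abs(f(a + k * step)) for k in range(1, n + 1))

def glue(f, g, U):
    """Glue f on U = (u0, u1) with g on an overlapping interval:
    use f inside U, g outside."""
    u0, u1 = U
    return lambda x: f(x) if u0 < x < u1 else g(x)

f = lambda x: x * x          # a section of x^2 on U = (0, 2)
g = lambda x: x * x          # the same section restricted to V = (1, 3)
h = glue(f, g, (0, 2))       # glued section on U ∪ V = (0, 3)
bound = max(sup_on(f, 0, 2), sup_on(g, 1, 3))   # max of the two bounds
```

On the infinite covering of R by the (−n, n) this fails, which is exactly why only finite coverings are kept in the site.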
And if U is an element of T, we consider the set Ũ of the prime filters α such that U belongs to α, and a basis of neighborhoods of a prime filter α is given by the sets Ũ with U an element of T belonging to α. We call X̃ the associated topological space. In this space X̃ the open subsets Ũ are quasi-compact, so every covering admits a finite subcover. I give an example: when X is equal to R and T is the family of finite unions of open intervals, we can classify the points of R̃. We have five kinds of points. The first one is the filter of the U containing x, and this corresponds to the classical points x of the topological space. Then we have the points which are represented, more or less, by the left and right neighborhoods of a point x: the points x⁺ and x⁻. And we have to add the points at infinity — if you want, the neighborhoods of plus and minus infinity. So you see that we have many more points in R̃ than in the real line R, and thanks to these new points we can effectively distinguish continuous functions from bounded continuous functions. For example, consider the continuous function 1/x: it is a continuous function on each open subset of the kind (0, ε), but it is not bounded on any such set, so it is not bounded on any neighborhood of the point 0⁺. Hence its germ does not belong to the stalk of bounded functions at the point 0⁺, but it does define a germ of a continuous function at 0⁺. So at the point 0⁺, continuous functions and bounded functions are different, even though they coincide at each point of the first kind — at each classical point of R, if you want. And for another example, the function f(x) = x is a continuous function, but it is not bounded at plus and minus infinity.
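The five kinds of prime filters just listed can be sketched as follows (my notation; the two filters at infinity are written in a formulation that presupposes unbounded intervals are allowed in T):

```latex
\alpha_x = \{\,U \in T : x \in U\,\} \qquad (x \in \mathbb{R},\ \text{the classical points}),\\[2pt]
\alpha_{x^+} = \{\,U \in T : (x,\,x+\varepsilon) \subseteq U \text{ for some } \varepsilon > 0\,\},
\quad
\alpha_{x^-} = \{\,U \in T : (x-\varepsilon,\,x) \subseteq U \text{ for some } \varepsilon > 0\,\},\\[2pt]
\alpha_{+\infty} = \{\,U \in T : (M, +\infty) \subseteq U \text{ for some } M\,\},
\quad
\alpha_{-\infty} = \{\,U \in T : (-\infty, M) \subseteq U \text{ for some } M\,\}.
```

For instance, the germ of 1/x at the point α_{0⁺} lies in the stalk of continuous functions but not in the stalk of bounded functions, which is exactly the separation of the two sheaves described in the talk.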
The stalks at plus and minus infinity are different. But we can prove much more — and, sorry, here X is a topological space, not necessarily a real analytic manifold. The category of sheaves on the site X_T and the category of sheaves on the topological space X̃ are equivalent. The categories of sheaves are the same; at the level of topoi they are the same. So if we consider the category of sheaves on X_T and we want to work on stalks, we can consider the topological space X̃. And this is useful because sometimes, to check something or to perform some operation, we really want to consider stalks or fibers, and we have a good topological space doing the job. Another important construction we can make on the site X_T is the subcategory of T-flabby objects. T-flabby objects are sheaves such that for each couple U, V with V contained in U, the restriction morphism is surjective. This is very useful when we want to derive functors, because this family is injective with respect to the functor Hom(F, −) with F coherent, with respect to the functors of sections on open subsets, and with respect to direct images as well — of course, once one has defined a morphism of topological spaces from X to Y which is compatible with the families T_X and T_Y on the two topological spaces. So the T-flabby sheaves are enough to derive these functors and to find resolutions which are acyclic — injective with respect to Hom and to the functor f_* associated to continuous maps respecting the families T on the two topological spaces. Okay, so thanks to this construction we have a global approach generalizing the subanalytic, semialgebraic and o-minimal settings, and all these constructions are useful in all these settings. But now I want to focus on the subanalytic site and talk to you about some constructions we can make in this setting, which have been part of my research. So we consider the subanalytic site X_sa, where, as I told you before, the open subsets are the relatively compact open subanalytic subsets.
And the coverings are the coverings by relatively compact subanalytic open subsets admitting a finite subcover. Okay, in this setting the idea is that we can consider more objects. Beyond the basic example of bounded functions, in this setting we can consider, for example, tempered functions: C-infinity functions which have polynomial growth at the boundary of an open subset. Again, this is not a sheaf for the usual topology, just as in the case of bounded functions. You see, when you impose a condition at the boundary of open subsets — functional spaces depending on conditions at the boundary of an open subset — they usually do not define a sheaf if you consider the usual topology: you lose the condition at the boundary. First let me give an example of this fact. We consider the real line and the function e^{1/x}. This function is tempered — it is even bounded — on each open subset U_n of the x between 1/n and 1, with n natural. But if we consider the interval (0, 1), which is the union of these U_n, the function e^{1/x} is still well defined, still a continuous function, but it is not tempered, because near zero it grows so fast that the growth is not polynomial anymore. So tempered functions do not define a sheaf if we consider R with the usual topology. But it is possible to show that if we have two subanalytic open subsets and we consider their union and intersection, we have the exact sequence corresponding to the gluing conditions — so we have gluing conditions on couples of open subanalytic subsets. And as we have seen in the general case, if we check that a presheaf is a sheaf on couples of open subsets of T — in this case relatively compact open subanalytic subsets — we obtain a sheaf.
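A sketch of the polynomial growth condition and of the example just given (the constants C and N are my notation; this is the standard formulation of temperedness on relatively compact subanalytic opens, stated here for the function itself, with analogous bounds for all derivatives):

```latex
% f in C^\infty(U) is tempered if there exist C, N with
|f(x)| \;\le\; C \,\operatorname{dist}(x, \partial U)^{-N}
\qquad \text{for all } x \in U .
% Example from the talk: U_n = (1/n, 1), so that (0,1) = \bigcup_n U_n.
% The function f(x) = e^{1/x} is bounded on each U_n (by e^{\,n}),
% but near 0^+ it dominates every power of 1/x, so no such C, N exist
% on the union (0,1): temperedness fails exactly at the boundary.
```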
So tempered functions are a sheaf on the subanalytic site. And moreover, in this setting we can also define the six Grothendieck operations: we have the tensor product, Hom, and the direct and inverse images, which are always well defined on the site; but for subanalytic sheaves we also have a proper direct image functor and its adjoint, the exceptional inverse image. And now let's talk about functorial constructions. We will see how, using functorial operations, we can construct analytical objects which are important. We consider a real analytic manifold X of dimension n and a family of closed submanifolds M_1, ..., M_l, and we suppose that they intersect transversally. As a basic example, we consider C² and the submanifolds Z_i = {z_i = 0}, i = 1, 2. And we need the notion of normal deformation. When we have an analytic manifold X and a submanifold M_1, we can construct an analytic manifold X̃_{M_1} and maps p from X̃_{M_1} to X and t from X̃_{M_1} to R satisfying these properties: p^{-1}(X ∖ M_1) is isomorphic to (X ∖ M_1) × (R ∖ {0}); t^{-1}(c) is isomorphic to X for each c different from 0; and t^{-1}(0) is isomorphic to the normal bundle of M_1. We now consider the restriction of p_{M_1} to {t_1 ≠ 0}, and we define M̃_2 as the inverse image of M_2 under this restriction. And we perform again the normal deformation of X̃_{M_1} with respect to this M̃_2. We obtain another map, and we define recursively the multi-normal deformation X̃ along the family M_1, ..., M_l of submanifolds. We consider the zero section S of this deformation, with its inclusion s, and the set Ω given by t_1, ..., t_l greater than 0. And we get a commutative diagram: we have the inclusions of the zero section and of the set Ω, and we have the projection from X̃ to X.
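The defining properties of the (single) normal deformation can be written out as follows (standard notation, with T_{M_1}X the normal bundle; in local coordinates adapted to M_1 the map p multiplies the normal coordinates by the parameter t):

```latex
p : \widetilde{X}_{M_1} \to X, \qquad t : \widetilde{X}_{M_1} \to \mathbb{R},\\[2pt]
p^{-1}(X \setminus M_1) \;\simeq\; (X \setminus M_1) \times (\mathbb{R} \setminus \{0\}),\\[2pt]
t^{-1}(c) \;\simeq\; X \quad (c \neq 0),
\qquad
t^{-1}(0) \;\simeq\; T_{M_1}X .
```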
Starting from this diagram, we can construct the functor of multi-specialization, which sends a sheaf on X to a sheaf on the zero section of the normal deformation: we take a sheaf, we take its inverse image by p, we take the sections supported on the set Ω where the parameters are greater than 0, and then we restrict to the zero section. And I show you the example in dimension 2, to understand what this specialization is; it corresponds to the setting considered by Majima when he studied asymptotic functions. X̃_{M_1} is given by C² with an extra variable t_1, and the projection is the multiplication of this new variable t_1 with the first variable. Then we perform again the deformation with respect to M_2, which is locally given by the coordinates (z_1, 0, t_1), and we construct X̃ by adding an extra variable t_2; the projection is then the multiplication of the two variables: (t_1 z_1, t_2 z_2). In this case the zero section is equal to the fiber product over X of the two normal bundles of M_1 and M_2. Okay, this is the construction we can perform on C². And now we consider the presheaf of Whitney C-infinity functions, which are functions bounded with bounded derivatives on a relatively compact subanalytic open subset. This is again a subanalytic sheaf on X: when we glue on a couple of subanalytic open subsets we of course keep the bound on the function and on all the derivatives. And then we consider the sheaf of Whitney holomorphic functions, so holomorphic functions which are bounded with bounded derivatives. From these sheaves we can construct functorially the sheaf of multi-asymptotically developable holomorphic functions, by applying the functor of specialization to the sheaf of Whitney holomorphic functions — so by specializing the sheaf of Whitney functions. Again, when X is equal to C², we consider the Whitney holomorphic functions of C² and apply to this sheaf the functor of specialization.
And if we check on stalks, the stalk at a point of this product of the two normal bundles — a direction over a point, away from the origin — is the inductive limit of the stalks of Whitney holomorphic functions on products of sectors containing that direction. And this functorial construction corresponds to an analytical object which is very well known, namely Majima's asymptotics, which are constructed starting from an asymptotic expansion: a family of formal power series with holomorphic coefficients, from which one builds the approximate functions. And we say that a holomorphic function is strongly asymptotically developable if and only if, for any polysector properly contained in a polysector S, we have the estimate that the difference between f and the approximate function is bounded by a constant times the powers of z_1 and z_2 raised to N_1 and N_2 — and this for each subsector properly contained in S. Okay, so we have this classical analytical definition of asymptotic expansion. And it is possible to check that a function is asymptotically developable if and only if, on each polysector properly contained in a given polysector, it is Whitney — so bounded with bounded derivatives. And that means that the sheaf of asymptotically developable functions can be constructed by specializing the sheaf of Whitney holomorphic functions. So if we know the Whitney holomorphic functions, we can talk about asymptotics on sectors. And starting from Whitney holomorphic functions, we can consider other subanalytic sheaves, namely the sheaves of flat Whitney holomorphic functions and of Whitney holomorphic functions on the set Z, where Z is the union of the submanifolds. And we can prove functorially that we have an exact sequence of specializations, and this corresponds to the Borel–Ritt exact sequence. We can prove this sequence functorially, by studying the cohomology of flat functions and of Whitney functions.
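Majima's estimate can be written out explicitly for two variables (S' ⋐ S a proper subpolysector; App_{(N_1,N_2)}(f) the approximate function built from the family of coefficients, and C_{S',N} my notation for the constant):

```latex
\bigl|\, f(z) \;-\; \mathrm{App}_{(N_1, N_2)}(f)(z) \,\bigr|
\;\le\; C_{S',N}\; |z_1|^{N_1}\, |z_2|^{N_2},
\qquad z \in S', \quad \text{for all } N_1, N_2 \in \mathbb{N}.
```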
This is the Borel–Ritt exact sequence, because when we consider the case of Majima asymptotics, we can check that Majima's asymptotics are the specialization of Whitney functions; the flat asymptotics are the specialization of the Whitney functions flat on the axes; and the third term corresponds to Majima's consistent families of coefficients. And the fact that the sequence is exact corresponds to the classical Borel–Ritt theorem, which says that given a consistent family of coefficients we can find a holomorphic function which is asymptotic to the family. So we obtain a general Borel–Ritt exact sequence for multi-asymptotically developable functions. Just to fix the notation, a consistent family is given by a triple (f_1, f_2, f_{12}) such that f_1 and f_2 are series of functions in the first and the second variable which are asymptotic to f_{12}, which is a double series of constant coefficients. And as a last application of subanalytic sheaves, we want to see how, by adding new analytical objects to the category, we can distinguish differential operators. So we consider the sheaf of rings D_X of differential operators: locally, a section is a differential operator with holomorphic coefficients. And holomorphic functions are a sheaf of modules over this ring — D_X acts on holomorphic functions by derivation and multiplication by the coefficients. And given a D-module F over the ring of differential operators, we can define, for an operator P, the complex Sol_F(P), which is the complex we obtain by applying P to the sections of F. This complex has cohomology in degrees zero and one. The degree zero corresponds to the homogeneous solutions of the differential operator, and the H¹ is the cokernel — the obstruction to solving Pu = g, if you want. And we say that two operators are equivalent if kernels and cokernels are isomorphic, that is, if Sol_F(P_1) and Sol_F(P_2) are quasi-isomorphic.
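The solution complex just described can be sketched as follows, for a single operator P acting on a sheaf of D-modules F (standard formulation):

```latex
\mathrm{Sol}_F(P) :\qquad
0 \longrightarrow F \xrightarrow{\;\;P\;\;} F \longrightarrow 0,\\[2pt]
H^0\,\mathrm{Sol}_F(P) \;=\; \ker\bigl(P : F \to F\bigr)
\quad \text{(homogeneous solutions, } Pu = 0\text{)},\\[2pt]
H^1\,\mathrm{Sol}_F(P) \;=\; \operatorname{coker}\bigl(P : F \to F\bigr)
\quad \text{(obstruction to solving } Pu = g\text{)}.
```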
An easy example: consider the two operators z∂_z − α and z∂_z − α − 1. If α is different from −1, one can easily verify that the solutions of these two operators are isomorphic: the solution of the first one is z^α and of the second is z^{α+1}, and we can pass from one to the other by multiplying by z, or by deriving and then dividing by α + 1. So the two complexes are isomorphic and the operators are equivalent. But, for example, if we consider the two operators z∂_z + 1 and z²∂_z + 1, they have z^{-1} and e^{1/z} as homogeneous solutions. And if we consider meromorphic functions in a neighborhood of zero, then the H⁰ of the solutions of z∂_z + 1 is C · z^{-1}, but for the second operator the H⁰ is zero, because the exponential e^{1/z} is not meromorphic. So these two operators are not equivalent — even though in this case the holomorphic solutions away from zero agree, because e^{1/z} is holomorphic there. In general, if we give the definition of a regular operator — regularity depending on the orders of the terms and of the derivatives — there is a fundamental theorem saying that if P and Q are two operators which are regular at zero in a neighborhood of the origin, they are equivalent if and only if their holomorphic solutions are quasi-isomorphic. So holomorphic solutions are enough to decide whether two regular differential operators are equivalent or not. This of course has a huge generalization in the Riemann–Hilbert correspondence proved by Kashiwara and Mebkhout. But what about when an operator is not regular? In this case, subanalytic sheaves can be helpful. And now we consider the subanalytic sheaf of tempered holomorphic functions. I said before that tempered C-infinity functions are a subanalytic sheaf; now we consider holomorphic functions.
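A quick check of the homogeneous solutions quoted in the two examples:

```latex
(z\partial_z - \alpha)\, z^{\alpha}
  = \alpha z^{\alpha} - \alpha z^{\alpha} = 0,
\qquad
(z\partial_z - \alpha - 1)\, z^{\alpha+1}
  = (\alpha+1) z^{\alpha+1} - (\alpha+1) z^{\alpha+1} = 0,\\[2pt]
(z\partial_z + 1)\, z^{-1}
  = z \cdot (-z^{-2}) + z^{-1} = 0,\\[2pt]
(z^{2}\partial_z + 1)\, e^{1/z}
  = z^{2}\,(-z^{-2})\, e^{1/z} + e^{1/z} = 0 .
```

The condition α ≠ −1 in the first example is exactly what allows dividing by α + 1 when going back from z^{α+1} to z^α via the derivative.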
And they have a structure of D_X-modules — one has to work in the derived sense in order to allow multiplication by the coefficients, but this does not create any problem. And we consider two irregular operators — they do not satisfy the definition of regularity I gave before — namely z²∂_z + 1 and z³∂_z + 2. The solutions are the exponentials e^{1/z} and e^{1/z²}. And — this is not really a theorem, it is easy to check — there are open subanalytic sets containing the origin in their boundary such that e^{1/z} is tempered and e^{1/z²} is not tempered: for example, the left half plane. In particular, that means that the H⁰ of the tempered solutions of z²∂_z + 1 and of z³∂_z + 2 are different, because the first one is given by e^{1/z} times a constant, while the other is zero, because on this open subset e^{1/z²} is not tempered anymore. So we can distinguish these two irregular differential operators, thanks to tempered holomorphic functions. And this is not possible with holomorphic functions, because the two exponentials are both well defined and holomorphic on any set not containing the origin. And this can be generalized, because there is a particular family of differential operators of degree one which play an important role in the classification of differential equations: those which have as solution the exponential of P(1/z), where P is a nonzero polynomial in 1/z without constant term. All these operators are irregular, and for all these operators the holomorphic solutions are not enough to distinguish them — we cannot distinguish them using the sheaf of holomorphic functions. But there is a theorem of Morando.
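A sketch of why the left half plane works as the distinguishing open set (my computation, consistent with the claim in the talk):

```latex
% On { Re z < 0 }: writing z = x + iy with x < 0,
\operatorname{Re}\!\left(\tfrac{1}{z}\right) = \frac{x}{|z|^{2}} < 0
\quad\Longrightarrow\quad
|e^{1/z}| = e^{\operatorname{Re}(1/z)} \le 1,
% so e^{1/z} is bounded, hence tempered, on the left half plane.
% But along the negative real axis z = x < 0, which lies in the same set,
|e^{1/z^{2}}| = e^{1/x^{2}} \longrightarrow \infty \quad (x \to 0^{-})
% faster than any power of 1/|x|, so e^{1/z^2} is not tempered there.
```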
If the two polynomials are not one a constant multiple of the other — so for a large family of polynomials — the tempered holomorphic functions are enough to distinguish the operators: we can find open subsets such that the exponential solution of one is tempered and the other is not. So, thanks to tempered holomorphic solutions: going to the subanalytic site, we find an object which can now be represented as a sheaf, the tempered holomorphic functions, and thanks to this object we can distinguish a very big family of irregular differential operators which we cannot distinguish with holomorphic solutions. And I want to add that this problem was completely solved, with a very heavy technique, by D'Agnolo and Kashiwara: one first has to add one variable, then quotient, and then go to the category of enhanced ind-sheaves. This is a very technical process, but it solves completely the problem of distinguishing irregular differential operators, and irregular D-modules as well. And now just a final slide, because I said that these topologies also generalize o-minimal sheaf theory. When we consider the o-minimal setting, we can perform all the constructions of X_T, but we can also develop the formalism of the six Grothendieck operations, thanks to the fact that we can work on fibers. For various kinds of o-minimal structures we can define the six Grothendieck operations, and thanks to them we can derive very important formulas and computations: the projection formula, the proper base change formula, the Künneth formula, local Verdier duality, and the computation of cohomologies. And this case is different from the case of subanalytic sheaves, where we could really work on the site and then compare with classical sheaf theory, where a lot of these formulas were already established.
In this case, we really work on fibers — we work on the associated topological space — and, thanks to the fact that these topological spaces are good, because they are normal, we can perform these constructions. Okay, thank you very much for your attention. Thank you so much.
Let T be a suitable family of open subsets of a topological space X stable under unions and intersections. Starting from T we construct a (Grothendieck) topology on X and we consider the associated category of sheaves. This gives a unifying description of various constructions in different fields of mathematics.
10.5446/55541 (DOI)
So it's really a great pleasure for me to introduce Olivia Caramello, who is at the University of Insubria in Como and holds the Gelfand Chair at the IHES, and who has contributed so much to the theory of toposes. She will talk about, if you want, the geometric theory of toposes and the new theory of relative toposes which, with respect to the logical aspects of topos theory, will surely play a fundamental role in extending the theory to higher-order logics. Olivia, thank you so much for this excellent introduction. So yes, as Alain said, this course is going to be a geometric introduction to topos theory, using the language of sheaves and stacks in relation with relative toposes — and by this I mean doing topos theory over an arbitrary base topos, so allowing a change of base topos in a similar way as Grothendieck used to do in algebraic geometry with schemes. Relativization techniques for schemes play a central role in his refoundation of algebraic geometry, and we are trying to develop a similar formalism for toposes. So the plan of the talk is this. I shall start this course by reviewing the classical theory of sheaves on a topological space. In fact it is quite important to take this as a starting point, also for the purpose of developing relative topos theory, because we want to keep the geometric foundations at the center; indeed, one of the central ingredients of our approach to relative topos theory will be an adjunction which provides a wide generalization of the very classical presheaf–bundle adjunction for topological spaces. So to understand these new developments it is important to start from the very classical theory of sheaves on a topological space, and presheaves more generally.
Then I will make a methodological interlude on the technique of toposis-dridges because it will be applied both in this course to derive results and also in other lectures or talks at this conference, especially to derive concrete results in different mathematical contexts by exploiting the possibility of presenting a given topos in multiple ways. So this is a sort of basic technique that can be used for extracting concrete knowledge from toposes or more precisely from equivalences between toposes presented in different ways or morphisms between toposes, again presented in different ways. So as when we do Relative Topos Theory actually we are concerned with the study of morphisms of toposes because what is a Relative Topos? Well it's just a topos which we decided to consider over another topos via a morphism connecting between them. So basically doing Relative Topos Theory essentially amounts to studying morphisms between toposes and so we shall investigate morphisms between toposes from the point of view of site presentations of toposes. So we shall describe how one can induce morphisms between toposes starting from functors satisfying the suitable properties. We shall see that there are two main classes of functors which induce in a contravariant or in a covariant way morphisms between the associated topos. These are the so-called morphisms and comorphisms of sites. In fact the morphisms of sites represent an algebraic point of view on morphisms of toposes while comorphisms represent a geometric viewpoint and in fact we shall see that they are in a sense dual to each other. Then I shall also review a classical notion already introduced by Grotendick of the function of between sites, the notion of a continuous function between sites which in fact plays an important role in the context of vibrations which are very important for developing the Relative Topos Theory. 
We shall see that whenever one has a morphism of fibrations, this gives rise to a comorphism of sites in a canonical way, which is moreover continuous. Continuous comorphisms of sites, as we shall see, induce a special kind of morphism between toposes, the so-called essential morphisms, which satisfy pleasant features; and, also in preparation for the last part of the course, we shall describe these continuous functors in relation with fibrations. And then, as I said, the last part of the course will give an introduction to our work in progress with Riccardo Zanfa on developing relative topos theory by using the language of stacks. We shall take as a starting point Giraud's paper 'Classifying topos', where the notion of classifying topos of a stack was originally introduced. We shall extend many of its results, and then we shall also turn our attention to relative sheaf toposes: we shall introduce a notion of relative site, and we shall also compare our approach with the more classical one that has been pursued by category theorists, which is based on the notion of internal category and internal site. We shall see that this formalism of stacks and relative sites is much more flexible than internal categories and internal sites. It allows a parametric reasoning, and it will also pave the way, as Alain has mentioned, to an extension of geometric logic — which is, as was explained in the previous course, the logic underlying Grothendieck toposes over Set — to some higher-order geometric logic, provided by the fact that we can change the base topos. This will allow us to quantify over parameters essentially coming from the base topos, in a way which is still to be made precise; these are developments that we are pursuing at the moment. So I will present you some of the results we have already obtained in this connection on relative toposes.
For the moment we have mostly focused on the geometric side of the subject, but once all the geometric aspects are completely clarified, we shall also introduce a higher-order relative parametric geometric logic corresponding to it. Okay, so this is the plan, and now we can start by recalling the theory of presheaves and sheaves on a topological space. So, given a topological space, what is a presheaf on the space? Well, a presheaf is simply a way of functorially assigning, to any open set of the space, a set, in a contravariant way — so in a functorial, contravariant way: we want that to any inclusion of open sets corresponds a function going in the opposite direction, which in fact is traditionally called a restriction map, because the basic example of a presheaf one has in mind is that of continuous functions on the open sets of a given space. You see that if you pass from an open set to a smaller open set, you can restrict continuous functions on that open set to the smaller open set, and so in this way to an inclusion of open sets corresponds a map going in the other direction, from continuous maps on the big open set to continuous maps on the smaller one, given just by restriction. Of course, in general you can have presheaves, or even sheaves, which do not look at all like this, but this example of continuous functions, or other kinds of functions on a space, has been one of the motivating examples in the development of the theory, and so these maps are still frequently called restriction maps in the literature. Okay, so we have said what a presheaf is; then of course there is a natural notion of morphism between presheaves, which is simply a collection of maps between the sets corresponding to the open sets which is compatible with the restriction maps.
Now, the way I have presented presheaves so far is very concrete and explicit, but in fact, categorically speaking, one can simply say that a presheaf is just a functor defined on the opposite of the category of open sets of the space, with values in sets. So O(X) is a poset category whose objects are the open sets of the space and whose arrows are just the inclusions between them. Using this language, we can rephrase what a morphism of presheaves is as just a natural transformation between the corresponding functors. So we have a category of presheaves on X, which is denoted as written in the slides — it is basically a category of set-valued functors. Okay, so now we are going to define sheaves as particular kinds of presheaves. A sheaf is a presheaf which satisfies certain gluing conditions. So what do I mean by gluing conditions? Well, the idea is that one should be able to define, in a unique canonical way, a certain global datum starting from a set of local data that are compatible with each other. This idea of gluing compatible local data is expressed formally by the two conditions in the definition of sheaf. So, formally, the conditions refer to coverings of a given open set of the space by a family of open subsets. The gluing condition is formulated with respect to such covering families: for each such covering family, one requires that whenever one has a set of elements of the presheaf indexed by the open sets in the family which is compatible — which satisfies the compatibility conditions, expressed by the fact that the restrictions of any two of them to the intersection of the corresponding open sets are equal — then there should be a unique amalgamation of these local data, a unique global datum which restricts to each of these local data. And the uniqueness of such a global datum is ensured by condition one: condition one ensures the uniqueness, condition two ensures the existence.
So together we have uniqueness and existence of a global amalgamation of a set of locally compatible data. Okay, so now, as we did for presheaves, we might wonder whether it is possible to categorify the notion of sheaf on a topological space. There are a few remarks that we need to make in order to arrive at such a categorical generalization. First of all, we remark that the sheaf condition is expressed, as we said, with respect to coverings of open sets of the space by families of open subsets. So if we want to replace the category of open sets of our space with an arbitrary category, and if we want to be able to formulate a sheaf condition, we need to have a collection of families of arrows going to a given object which provides a replacement for the coverings of an open set by a family of open subsets. This, as we shall see, will be provided by the notion of a Grothendieck topology on a category. So this is the first ingredient. Then there is another element which requires some thought in order to arrive at the categorical generalization: it is what you see in condition two concerning the compatibility relation for the local data, which involves, in the topological setting, considering intersections of open sets. Of course, in an arbitrary category you will not in general have an analogue of that. But by using the categorical notion of a sieve we can get around this problem, and we will be able to formulate the sheaf condition on any category equipped with a Grothendieck topology by using this device. I shall give the details in a moment. For the moment I just wanted to remark that sieves can be defined on any category — basically as certain contravariant set-valued functors — so we don't need any additional data on the category. On the other hand, to define sheaves we need to specify a collection of covering families, and this will be provided by the notion of a Grothendieck topology.
Okay, so just a few remarks before going to the categorical generalization. Categorically speaking, we can reformulate the sheaf condition as an equalizer condition, because our presheaves have values in sets, where we can consider arbitrary products, and so we have canonical maps as written on the slides. You realize immediately that you can formulate the sheaf condition for a presheaf as the condition that the canonical map going from F(U) to the product of the F(U_i), where the U_i form a covering of U, should be the equalizer of the two canonical maps between the two products. So it's important to remark that the sheaf condition is actually a limit kind of condition in the category of sets. And secondly, in preparation for the categorical generalization, it's important to remark that by using the technical device of sieves we can avoid referring to intersections when formulating the gluing condition. How does one do this? Given a covering family of open subsets U_i of an open set U, you generate a sieve starting from that family, which in this case is the collection of all the open subsets of U which are contained in some U_i. And by doing this, you can rephrase the compatibility condition by taking, instead of a family of elements indexed by the open sets in the covering family, a family of elements indexed by all the open sets in the sieve; and the compatibility relation becomes the requirement that whenever you have an open set W' included in W, you get the equality between the value at W restricted to W' and the value at W'.
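The equalizer formulation just described can be sketched as follows (standard; the two parallel maps restrict the two factors of each pairwise intersection):

```latex
F(U) \longrightarrow \prod_{i} F(U_i)
\;\rightrightarrows\; \prod_{i,j} F(U_i \cap U_j),\\[2pt]
s \longmapsto \bigl(s|_{U_i}\bigr)_i,
\qquad
(s_i)_i \longmapsto \bigl(s_i|_{U_i \cap U_j}\bigr)_{i,j}
\ \text{ and } \
(s_i)_i \longmapsto \bigl(s_j|_{U_i \cap U_j}\bigr)_{i,j}.
% F is a sheaf precisely when, for every open covering U = \bigcup_i U_i,
% the first map is the equalizer of the parallel pair.
```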
So you see that in this way you have eliminated the reference to intersections. By using this idea of sieves, of taking everything which is below, everything which is generated by composition on the right from a certain family of arrows (because you can regard these inclusions as arrows in the category of open sets of X), you can avoid referring to intersections, and so you can already understand why it is possible to define sheaves on an arbitrary category equipped with an essentially arbitrary notion of covering families in it, as will be provided by a so-called Grothendieck topology. Okay, so now we shall go into more detail about all of this, but before that I would like to give some examples of sheaves. I have already mentioned the main motivating example provided by continuous functions on a topological space, but of course sheaves are used in many other areas of mathematics, in the context of differential geometry, analysis, algebraic geometry, etc. For instance, you have sheaves of regular functions on a variety, of differentiable functions on differentiable manifolds, of holomorphic functions on a complex manifold, etc. In fact, in mathematics sheaves arise in many different contexts, and very frequently sheaves appear endowed with more structure than just the set-theoretic one. For instance, in algebraic geometry one has sheaves of modules, sheaves of rings, even sheaves of local rings, etc. So far we have talked about sheaves of sets, and in fact there is a good reason for taking as starting point sheaves of sets rather than sheaves of more complicated types of structures.
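The sieve generated by a covering family {U_i} of U, together with the compatibility condition rephrased over it, can be written as:

```latex
S = \{\, W \subseteq U \ \text{open} \;\mid\; W \subseteq U_i \ \text{for some } i \,\},
\qquad
x_W|_{W'} = x_{W'} \ \ \text{whenever } W' \subseteq W \ \text{in } S .
```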
In fact, Grothendieck himself realized the importance of first talking about sheaves of sets in order to have better categorical properties when you consider the whole category of sheaves of sets on a topological space, or more generally on a site, because if you replace sets with another category, in many cases you lose some pleasant categorical properties. So it is good to define categories of sheaves of sets and then try to understand sheaves of more complicated structures as relative to these sheaves of sets. Formally, the way this is done, at least for geometric theories in the sense which has been explained by Laurent in his lectures, is to look, for instance, at a sheaf of models of a certain geometric theory. In particular, it could be sheaves of modules, sheaves of rings, sheaves of local rings, etc.: you regard such a sheaf as a model of the theory of such structures, that is, a model of the theory of modules, of rings, of local rings, formalized within geometric logic, inside the category of sheaves of sets on the space or the site. So this is the way we can naturally deal with these sheaves of more complicated structures. Yes, this is an important remark to make from a formal viewpoint. Okay, now let us proceed to talk about the fundamental adjunction between presheaves and bundles on a topological space, which I mentioned in the introduction of my talk. After this we shall describe sheaves on a site. So how does this very classical adjunction work in the topological setting? First we define, given a topological space X, a bundle over X simply as a continuous map towards X. We have of course a category of bundles, which is simply the slice category: Top here denotes the category of topological spaces and continuous maps, and here I am taking the slice category over X, so the objects are continuous maps towards X and the arrows are commutative triangles.
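Spelled out, the category of bundles just described is the slice category of Top over X:

```latex
\mathbf{Top}/X:\quad
\text{objects: continuous } p : Y \to X;\qquad
\text{arrows } (Y,p) \to (Y',p'): \ \text{continuous } f : Y \to Y' \ \text{with } p' \circ f = p .
```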
Okay, now why is it interesting to consider bundles in relation with sheaves? Because there is a very nice construction which allows us to build a sheaf from an arbitrary bundle through the consideration of the so-called cross-sections of the bundle. Given an open set of our space X, a cross-section of a bundle over this open set is simply a continuous map defined on that open set, going to the domain of the bundle, such that when it is composed with the bundle map it gives the inclusion of the open set into the space. Of course you can collect all the cross-sections over a given open set U into a set, the set of all cross-sections over U. If you think a minute about this, you realize that this operation is functorial in a contravariant way in the open set, because if you pass from an open set U to an open set V contained in U you get a restriction operation: you can restrict a cross-section over U to a cross-section over V. This is completely clear. In this way you actually get a contravariant functor on the category of open sets of X with values in sets, namely a presheaf, and it is not hard to show that this presheaf is actually a sheaf; it is called the sheaf of cross-sections of the bundle. Given the fact that we can build sheaves from bundles, it is natural to wonder if one can go in the other direction as well, and this is possible through the construction of the so-called bundle of germs of a presheaf. Suppose that you start with a presheaf on a space X: you can build a bundle out of it by considering germs of sections of the presheaf at points of the space. First we have to define what a germ is at a given point of the space. A germ is an equivalence class of sections defined over open neighbourhoods of the point, where the equivalence relation is what I have written on the slides.
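Writing p : Y → X for the bundle and i_U : U ↪ X for the inclusion, the sheaf of cross-sections can be written as:

```latex
\Gamma_p(U) \;=\; \{\, s : U \to Y \ \text{continuous} \;\mid\; p \circ s = i_U \,\},
\qquad
\Gamma_p(U) \to \Gamma_p(V),\ \ s \mapsto s|_V \quad (V \subseteq U).
```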
So two sections, defined on open neighbourhoods of the point (of course the open neighbourhoods may be different), are considered equivalent if there is some open neighbourhood of the point, contained both in U and in V, on which their restrictions agree. This is an equivalence relation, so we can take equivalence classes, and we can do this for each point of the space. If we fix a point and take the collection of all germs at this point, we get what is called the stalk of the presheaf at the given point. And of course we can consider the disjoint union of all the stalks, a union indexed by the points of X, and we have a projection map to X, defined in the obvious way: it just takes the point at which the germs are defined. We can topologize the domain of this projection map in such a way that it becomes a local homeomorphism, and what we get is what is called the bundle of germs of the given presheaf. Okay, so now we have two constructions, one going in one direction, the other going in the converse direction. We have these two functors: Γ, which is the functor of cross-sections, and Λ, which is the functor giving the bundle of germs, and in fact these two functors form an adjoint pair, with Λ left adjoint to Γ. Γ is a sections functor, so in particular you can apply it to the space itself, in which case you get the global sections of the bundle. A key result is that this adjunction actually restricts to an equivalence of categories between, on the one hand, sheaves, that is, the presheaves which satisfy the sheaf condition, and, on the other hand, some particular bundles which can be characterized as the étale bundles, also called local homeomorphisms.
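In symbols, the stalk of a presheaf F at a point x and the resulting bundle of germs are:

```latex
F_x \;=\; \varinjlim_{U \ni x} F(U),
\qquad
\Lambda(F) \;=\; \coprod_{x \in X} F_x \;\xrightarrow{\;\pi\;}\; X ,
```

where the colimit ranges over the open neighbourhoods of x (ordered by reverse inclusion) and π sends a germ at x to the point x.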
In fact, the restriction of this adjunction is what you get by restricting to the fixed points of the adjunction. In general, whenever you have an adjunction you can restrict it to an equivalence of categories by restricting to the fixed points; this is a general process, and if you apply it in this case this is what you get. This is very nice because it allows us to think geometrically about sheaves as particular kinds of bundles, namely the étale bundles. This has several pleasant consequences concerning the geometric understanding of a number of constructions on sheaves and presheaves. In particular, I would like to point out two of the nice insights that such an adjunction brings out. First of all, the sheafification process. If you take most books on topos theory, you will see that sheafification is described by using the plus construction (applied twice), which is a technical means of constructing it, but not necessarily very geometrically intuitive. Thanks to this adjunction one has a more geometric understanding of the sheafification process, because we can describe the sheafification of a given presheaf simply as the result of applying successively the two functors forming the adjunction: the sheafification of a presheaf is simply the sheaf of cross-sections of the bundle of germs of the presheaf. You see that this is geometrically much more satisfying, because you really get a geometric understanding of what the elements of the sheafification really are. It is not just a formal quotient construction: you have a geometric realization of such elements. We shall come back to that, because in fact we shall be able to obtain a generalization of this in the context of an arbitrary site, and even in the context of stacks.
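In other words, the sheafification functor on X can be described as the composite of the two functors of the adjunction:

```latex
\mathbf{a} \;=\; \Gamma \circ \Lambda \;:\; [\mathcal{O}(X)^{\mathrm{op}}, \mathbf{Set}] \;\longrightarrow\; \mathbf{Sh}(X),
```

with the unit F → Γ(Λ(F)) of the adjunction giving the canonical map from a presheaf to its sheafification.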
So just keep in mind these features for the moment, because we shall then provide the generalization in the categorical and stack-theoretic setting. Okay, so we have talked about sheafification, geometrically understood. There are also other advantages of this point of view of sheaves as étale bundles. For instance, suppose you have a continuous map between topological spaces and suppose you want to describe the effect on sheaves of such a continuous map. There is of course a direct image of sheaves, which is defined in a straightforward way: you basically just compose with the action of the continuous map on open sets of the two spaces. This is completely straightforward. But suppose you want to understand the inverse image of sheaves along this continuous map. This is not completely straightforward if you want to do it in the language of sheaves; we shall see that you can do it by using a Kan extension, for instance. But here, by using the identification between sheaves and étale bundles, you have a very nice, simple description: taking inverse images of sheaves on Y along f corresponds precisely to taking the pullback along f of the étale bundles corresponding to these sheaves. In fact one can show that the pullback of an étale bundle is still an étale bundle, and this is how the inverse image operation on sheaves actually works. So you see that this adjunction really brings some very nice geometric intuition into the picture. It has been a question for several years whether one could find a good analogue of this working for arbitrary sites, or even possibly extending to stacks, and in fact in our joint work with Riccardo Zanfa we have indeed provided such a generalization, not just for presheaves but more generally for indexed categories. This will be described in the last part of the course. Okay, so now that we have talked about this fundamental adjunction we can go to the categorification of sheaves: how to define sheaves on an arbitrary site.
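Identifying sheaves with étale bundles, the inverse image along f : X → Y of a sheaf on Y, viewed as an étale bundle p : E → Y, is given by the pullback square:

```latex
\begin{array}{ccc}
X \times_{Y} E & \longrightarrow & E \\
\downarrow & & \downarrow{\scriptstyle p} \\
X & \xrightarrow{\;\;f\;\;} & Y
\end{array}
```

the left-hand projection X ×_Y E → X being again an étale bundle.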
For this, as I already anticipated, it is necessary to talk about sieves, because sieves were fundamental for giving a notion of compatible family of local data without requiring intersections. You remember that. It is a technical device that is essential for defining sheaves in the general categorical setting. So, formally, what is a sieve in a category? Given a category and an object of this category, a sieve on that object is simply a collection of arrows in the category towards that object which is closed under composition on the right. The condition is that whenever an arrow is in the sieve, the composite of this arrow with an arbitrary arrow (when defined) should again be in the sieve. You see, this is a categorical generalization of the condition we had for topological spaces: there we wanted the sieve to contain any smaller open set, and here we get the condition that it should be closed under composition on the right, which is just the categorification of it. Okay, so sieves are very nice objects, because you can perform a lot of operations on them. In fact, you can understand sieves more abstractly as the subobjects of the corresponding representable functor in the category of presheaves. This is an abstract understanding of sieves as subfunctors of representable functors, but you can avoid that point of view and reason about sieves in a perfectly concrete way. In particular, you can compute pullbacks of sieves, which again give rise to sieves. Here is the operation: if you have a sieve on an object and an arrow going to that object, you can pull the sieve back along this arrow to get another sieve. This sieve consists of all the arrows which, when composed with the given arrow, belong to the original sieve. It is a very naturally defined operation, and in fact it really corresponds to taking a pullback in the corresponding presheaf topos.
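In symbols: a sieve S on an object c, and its pullback along an arrow h : d → c, are given by:

```latex
f \in S \ \Longrightarrow\ f \circ g \in S \ \ \text{(whenever the composite is defined)},
\qquad
h^{*}(S) \;=\; \{\, g \ \text{with codomain } d \;\mid\; h \circ g \in S \,\}.
```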
But, as you see, you have a perfectly concrete description of this operation without involving presheaves. Okay, so now that we have talked about sieves we can introduce the fundamental notion of a Grothendieck topology on a category, which will be the basic setting for us to define sheaves. A Grothendieck topology is a way of assigning to the objects of a category a collection of sieves on those objects, in such a way that some natural conditions are satisfied. The first condition is called the maximality axiom: it requires the maximal sieve on each object to be in the topology. This is quite intuitive; of course the maximal thing should be covering. Then we have a second axiom, which is quite important, called the stability axiom: it requires that the pullback of any covering sieve should be covering. Again this is quite intuitive. In the topological setting it corresponds to the fact that if you have a covering of a certain open set by a family of open subsets, then when you pass from that open set to a smaller open set and intersect each of the open sets in the family with it, you still get a covering of the smaller open set. That is stability. You see that it is satisfied in the topological setting, so it is natural to require it on an arbitrary category as well. Then you have the so-called transitivity axiom, which says that whenever you have a sieve whose pullbacks along all the arrows of a covering sieve are covering, then the sieve itself should be covering. This axiom is a bit less important than the other ones; the maximality axiom you can always make hold without any problems, but the stability axiom is really the crucial one to have. In fact, for defining sheaves you basically just need maximality and stability, because you can then generate a Grothendieck topology starting from those, and it will have exactly the same sheaves.
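Writing J(c) for the set of covering sieves on an object c and M_c for the maximal sieve on c (all arrows with codomain c), the three axioms read:

```latex
\begin{aligned}
&\text{(maximality)} && M_c \in J(c);\\
&\text{(stability)} && S \in J(c),\ h : d \to c \ \Longrightarrow\ h^{*}(S) \in J(d);\\
&\text{(transitivity)} && S \in J(c),\ R \ \text{a sieve on } c \ \text{with } h^{*}(R) \in J(\mathrm{dom}\, h)\ \text{for all } h \in S \ \Longrightarrow\ R \in J(c).
\end{aligned}
```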
So basically not all the axioms of a Grothendieck topology have the same status; the most important one is really the stability axiom. In any case, one often works with bases for Grothendieck topologies, or with smaller presentations of Grothendieck topologies, and in general it is important to have techniques for computing the Grothendieck topologies presented by families of sieves. There are formulas for this: in my book, for instance, you can find a formula for computing the Grothendieck topology generated by an arbitrary collection of sieves, and some other techniques for computing topologies starting from bases satisfying certain properties. So in general it is an important theme, that of being able to compute a Grothendieck topology starting from certain sets of data. From a logical point of view, in fact, as Laurent will explain in the last part of this course, being able to generate a Grothendieck topology starting from a collection of sieves corresponds to deriving theorems within geometric logic starting from certain axioms. As you can see, this is something quite significant: especially when you can achieve a full description of the Grothendieck topology, it means you have a classification of all the geometric theorems that are provable in a given theory. Okay, in any case, the sieves which belong to the topology are called J-covering, where J is the topology, and a site is then defined simply as a pair consisting of a category and a Grothendieck topology on it. So which kinds of sites, in terms of size, are we going to consider? Well, for defining Grothendieck toposes one restricts to small sites, by which I mean that the underlying category of the site should be small, that is, have just a set of objects and arrows. But for technical reasons it is important also to consider larger sites which can still be associated with small sites in a meaningful way.
These are called small-generated sites: a site is said to be small-generated if the underlying category is locally small and admits a small J-dense subcategory, in the sense expressed on the slides. In particular, it will be convenient from a technical viewpoint to consider a topos itself as a site, with a topology called the canonical topology. You see that a topos in general will not be small, but such canonical sites will always be small-generated, by the definition of a topos as a category having a small set of generators. So it is important to keep in mind that, while a Grothendieck topos is formally defined as a category of sheaves on a small site, in fact one can extend to all small-generated sites without changing the resulting categories of sheaves. But this is just a technical point on which we shall come back later. Okay, now let us give a number of basic examples of Grothendieck topologies to get you familiar with the concept. Of course we can always put the trivial topology on any category, by taking as covering sieves just the maximal ones. Then there is a very interesting topology that one can put on an arbitrary category, called the dense topology, defined by taking as covering sieves precisely the stably non-empty ones, that is, the sieves whose pullbacks along arbitrary arrows are always non-empty. This simplifies in the situation where the category satisfies the right Ore condition, namely the property displayed on the slide: for any pair of arrows with common codomain, you can complete them to a commutative square. Under this hypothesis the pullback of any non-empty sieve is again non-empty, and therefore the dense topology specializes to the so-called atomic topology, whose covering sieves are precisely the non-empty ones.
The atomic topology is very important for several purposes in topos theory, in particular in connection with the topos-theoretic interpretation of Galois theory and its categorical generalizations. Okay, other examples of topologies. Of course, the motivating example for us was sheaves on a topological space: we were considering presheaves and sheaves on the category of open sets of the topological space. On such a category there is a canonical Grothendieck topology one can consider: we postulate that the covering families are exactly those which give covering families in the usual topological sense, that is, the open set should be the union of all the open sets in the family. This can of course be generalized to pointless topological spaces, also called frames or complete Heyting algebras. A frame is a complete lattice in which the infinite distributive law of arbitrary joins with respect to finite meets holds. You can really think of it as if it were the lattice of open sets of a topological space, even though in general you might have frames which are not of that form: only the spatial frames come from topological spaces, and there are many other frames which are not of this form, which can be studied and which can be interesting in their own right. On such a frame one can define a Grothendieck topology by using joins in the frame, thanks to the fact that arbitrary joins distribute over finite meets, which ensures that the stability axiom for Grothendieck topologies is satisfied. The topology we define in this way on a frame is called the canonical topology on the frame.
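Concretely, on a frame L the canonical topology declares a family of elements below a given element to be covering exactly when its join is that element:

```latex
\{\, a_i \leq a \,\}_{i \in I} \ \text{generates a covering sieve on } a
\quad\Longleftrightarrow\quad
\bigvee_{i \in I} a_i \;=\; a ,
```

with stability following from the frame distributivity law b ∧ ⋁_i a_i = ⋁_i (b ∧ a_i).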
Okay, now another set of examples of a different nature. Given a small category of topological spaces which is closed under finite limits (typically one supposes this) and under taking open subspaces, there is a natural topology one can define on it, called the open cover topology, because its covering families are precisely given by families of open embeddings which cover the given space, in the sense written there. In fact, the open cover topology plays an important role in the construction of Grothendieck toposes in the topological setting. With Riccardo Zanfa we have introduced a higher analogue of this open cover topology on the category of toposes itself, and we have shown that thanks to this one can essentially regard any Grothendieck topos as a sort of petit topos associated with a very big topos, related to it by a local morphism, etc. So this idea of the open cover topology is an interesting one. I will not have the time in this course to talk about this result, but if you are interested you will be able to read about it in our forthcoming work.
Okay, another very important example of a Grothendieck topology is the Zariski topology, which can be defined on the opposite of the category of finitely presented (or, equivalently, finitely generated) commutative rings with unit. Of course this topology plays a key role in algebraic geometry, and it admits a very simple, intuitive definition. The covering cosieves for this topology (I talk about cosieves because I switch from the opposite of this category to the category itself) are those which contain finite families of localizations of the given ring at elements of the ring, families characterized by the property that the ideal generated by these elements is the whole ring; equivalently, this set of elements is not contained in any proper ideal of the ring. Of course you understand the geometric significance of this definition: if you think of the Zariski spectrum of a ring, you can see that this in fact corresponds to a more intuitive kind of covering relation at the topological level, in terms of spectra.
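In symbols, a cosieve on a ring A is Zariski-covering when it contains a finite family of localization maps

```latex
\{\, A \longrightarrow A[a_i^{-1}] \,\}_{i=1}^{n}
\qquad \text{with} \qquad
(a_1, \dots, a_n) = A ,
```

that is, the ideal generated by a_1, …, a_n is the whole ring.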
Okay then, finally, Laurent has talked about syntactic sites in his course, and of course these are very important kinds of sites that one can build from any geometric theory. In fact, depending on which fragment of geometric logic you consider, you have different versions of syntactic sites: if the theory is regular, for instance, you have a regular syntactic site; if the theory is coherent, you have a coherent syntactic site; if the theory is geometric, you have a geometric syntactic site. So there are different versions of syntactic sites, but they will always present the same classifying topos, and so you have different ways of, say, embodying the syntax and the proof theory of a theory in a site which presents its classifying topos. It is quite interesting also to compare properties related to the different fragments in which a given theory can be considered. There are compatibility relations existing between the different fragments, and you can understand them very well by using the bridges, because you have just one classifying topos and different presentations of it provided by these different fragments; the point is that you can understand several invariants from these different points of view, and they will give rise to such compatibility relations. In any case this is just a remark: if you want to know more about this, you can take my book and you will find several compatibility results of this kind proved through bridges. Okay, so now we are ready to introduce sheaves on a site. Presheaves are defined in the obvious way, simply as contravariant functors with values in sets defined on the given category. Then, for defining sheaves on a site, we have to talk about compatible families of local data indexed by covering sieves in the Grothendieck topology. So we define a notion of matching family for a sieve of elements of a presheaf: this is defined as a way of assigning to each arrow in the sieve an element of
the presheaf, in such a way that the compatibility condition you see is satisfied. Here the sieve condition is fundamental: you see that I am considering the composite of f with g, so I am using the fact that, since we have a sieve and f belongs to it, f composed with g also belongs to the sieve, and so it makes sense to consider that element (I should have an element for each arrow in the sieve), and so I can formulate this compatibility condition. This is what a matching family is. Then, for such a family, we can define what an amalgamation should be: an amalgamation is a single element of the presheaf at the given object which is sent by the presheaf to all these local data along the arrows of the sieve. You see, everything is very natural, very unsurprising. Okay, so again, as in the topological setting, we can formulate the sheaf condition in terms of an equalizer, and by considering all the presheaves on a category which are sheaves with respect to a given Grothendieck topology we get the category of sheaves, denoted Sh(C, J) for the given site (C, J), with as arrows the natural transformations between these sheaves regarded as presheaves. Just to remark, the sheaf condition can be expressed categorically in a very nice way. I mentioned that a sieve can always be considered as a subobject of the corresponding representable (you see this vertically in the triangle), and the sheaf condition can be formulated as a sort of extension condition: a matching family for a given sieve can be thought of as a natural transformation from the sieve to the presheaf (this is quite clear, because the compatibility condition amounts precisely to naturality), and so, by the Yoneda lemma, the sheaf condition can be formulated by saying that every natural transformation defined on a covering sieve admits a unique
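In symbols, for a presheaf F on a category C and a sieve S on an object c, a matching family and its amalgamation read:

```latex
\big( x_f \big)_{f \in S}, \quad x_f \in F(\mathrm{dom}\, f), \quad
F(g)(x_f) = x_{f \circ g} \ \ \text{for all composable } g;
\qquad
x \in F(c) \ \text{with} \ F(f)(x) = x_f \ \ \text{for all } f \in S .
```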
extension, as in this diagram. Okay, so finally we can define what a Grothendieck topos is: a Grothendieck topos is any category which is equivalent to the category of sheaves on a small site, or more generally a small-generated site. Okay, so we have got to the central notion of this course. Examples of toposes: here are just three classes of examples. Of course there are infinitely many examples, but I selected these three just to show you very quickly how general toposes are. The first example deals with categories: whenever you have a category, you have an associated topos, the topos of presheaves on that category, which you obtain by equipping the category with the trivial topology. On the other hand, if instead of a category you start with a topological space, you also have a topos associated with it, the topos of sheaves on the topological space. Also, you can decide to start from a group and consider the category of actions of the group on discrete sets. You can also take the group to be topological if you want, in which case you take the continuous actions of the group on discrete sets, and you can show that in this way you indeed get a Grothendieck topos, because you can present this topos as the topos of sheaves on a particular site: the underlying category is that of non-empty transitive actions, and the topology is the atomic topology which we introduced before. So you see, already these three basic examples show you that toposes generalize categories, topological spaces and groups. Given the fact that all these concepts play a central role in mathematics nowadays, you can understand why toposes have a very big potential to have an impact essentially across all of mathematics, because they simultaneously generalize all of these. And there is much more to it, because, as we shall see, toposes can be attached to other kinds of entities as well.
Laurent has talked about how to associate toposes with theories, etc., and there are still many other approaches to the construction of toposes that one can introduce.
This course provides a geometric introduction to (relative) topos theory. The first part of the course will describe the basic theory of sheaves on a site, the main structural properties of Grothendieck toposes and the way in which morphisms between toposes are induced by suitable kinds of functors between sites. The second part, based on joint work with Riccardo Zanfa, will present an approach to relative topos theory (i.e. topos theory over an arbitrary base topos) based on stacks and a suitable notion of relative site.
10.5446/55549 (DOI)
So, yes, I'm Teresa Smith. I'm from the University of Bath, which is the campus that you can see on your screen there. And I'm going to talk about a project that I'm involved in, which is looking at tracking the spread of COVID-19 by looking at wastewater. Just a bit of an overview of this talk: I'm not sure how familiar the general crowd is with the idea of tracking population health through wastewater, so I'll give a brief intro to that, and then talk about why it's of interest for COVID-19 specifically. Then I'm going to spend some time talking about the problem within my project that I am most interested in, which relates a bit to handling spatial data in R. And at the end I'll talk about some further challenges in trying to model the spread of COVID-19 through wastewater. Okay, so wastewater-based epidemiology, which I'll abbreviate WBE, is a field that's been around way before COVID. The idea of wastewater-based epidemiology is that you can measure the health of a whole community through one sample. People live in communities, and what we consume, what's affecting us, gets excreted into the sewer system. So we're all excreting little compounds, or bits of DNA or whatever, that can say something about what we've been doing that day or what might be affecting us that day: what pharmaceuticals we've been taking, whether we've been drinking coffee (because caffeine, or metabolites of caffeine, get excreted), whether we're smokers, but also things like little snippets of DNA or RNA from pathogens that might be infecting us. So we excrete those; they go into the sewer system en masse, so they get collected up and pooled across the whole community that is contributing to that wastewater treatment plant.
You can then grab a sample (there are different ways to do this sampling), take it to a lab, analyze the constituents of that sample from the wastewater treatment plant, and then try to calculate whatever you're interested in about human health. I've mentioned a few examples, like how many people are drinking caffeine; maybe that's not the most interesting thing, but this approach has been used, say, to track illicit drug use, like looking for cocaine in the wastewater, and for things like antimicrobial resistance, so looking for the snippets of DNA that are related to resistance genes in bacteria. So that's the system and pathway and pipeline that was built up before COVID-19. And this is a picture from a recent paper by some of the people on the University of Bath team. Okay, so now wastewater-based epidemiology is coming into its own for COVID-19. I think probably one of the places where this is getting the most attention is Australia. This is a picture of Melbourne; it's a screenshot of a dashboard for Melbourne and the areas around it. Australia is using wastewater-based epidemiology in what I would call the qualitative sense: they're doing presence/absence. These orange exclamation points mark whether there was a hit, basically, whether COVID was or wasn't measured in a particular pumping station or big treatment plant. And this makes sense in Australia because their cases are low, so hit-or-miss information is something very useful to them. This is an example of a dashboard that you could go and look at today to see where the hits and misses are, in terms of finding those little bits of DNA for COVID-19, in and around Melbourne. But it's not just Australia, and it's not just the UK where I work; it's really all over the place.
As an example, if you look on Google Scholar for wastewater and COVID-19, at this time there are 23,000 papers. It's only been going on for a year and a half, two years, so that's a lot of papers to be produced about this. I have not checked that every one of these 23,000 papers is actually about modeling COVID-19 through the wastewater, but I did skim through at least hundreds and hundreds of them. There are hundreds and hundreds of papers trying to accomplish this task of being able to figure out where the COVID is based on these pooled samples from the wastewater. And if you're really keen, there's a whole Twitter community around this, with an aptly named COVID poops Twitter handle, and they'll tweet the latest papers and retweet different people who are working on this. So if you are keen after this talk, go follow the latest developments on the COVID poops Twitter. You can also see in the background that they have a dashboard on their website of where all the groups working on this are. So there are lots of groups working on wastewater-based epidemiology for COVID-19 in Europe, in the States and Canada, as well as the populated areas of Australia, but then there are groups all over the place as well. This is an example from the UK that is quite close to home. It's not the data that I work on, but it's publicly available data from the government where, for all of the wastewater treatment plants, they measure not something qualitative like in Australia but something quantitative. They're measuring the gene copies per liter for a particular snippet of DNA in the samples that they get from the wastewater treatment plants. And these are averaged; these are weekly averages from June.
And this box here in yellow is roughly the region that the project I'm working on is studying. So a couple of observations about this figure. As I said, they're doing something quantitative, not qualitative. It's not like the Melbourne figure where it was kind of on/off; it's trying to measure a quantity in the wastewater here. So there could be a little bit of COVID in the wastewater or a lot of COVID in the wastewater and everything in between. Another thing that I noticed about this picture is that the wastewater treatment plants are represented by just points, basically. I mean, the dots are big enough so that you can see the data that's been collected from there, but they're basically represented as point-level data. And that's part of what I'm going to talk about today. But the main idea here is that we can use these gene copies per liter that were being depicted on the previous slide to tell us about how many people are infected in the catchment, which is the area that contributes to the wastewater treatment plant. The idea is that if people excrete roughly the same amount of these gene snippets when they're ill with COVID-19, then there should be a rough proportionality between the number of people who are infected in the catchment area and the number of gene copies that you're measuring at the wastewater treatment plant. I don't know what this constant of proportionality is, but the hypothesis is that maybe we can find that there is some relative amount that tells us: if the wastewater measurements increase by this much, then we expect cases to increase by that same amount. So the aspiration of this field is that we can develop a model to relate gene copies per liter this week to the future number of infections. Writing it not very mathematically at all: I want some model f. It really could be anything that you want.
I've seen everything from random forests and really simple linear regressions to really complicated machine learning things, where we try to relate the number of infected next week to the gene copies per liter this week. It doesn't have to be weekly. I've written it as weeks, but: the number of gene copies per liter at the wastewater treatment plant today, what does that tell us about the number of people who are infected tomorrow? And part of the idea here is that you start excreting these little snippets of RNA from the SARS-CoV-2 virus into the wastewater at least a couple of days before you would go and get a test yourself. So ideally, this would be picking up changes in cases well before you would see an increase in cases from the rest of the usual public health testing, like people going in and seeking a test because they're feeling like they have symptoms. Someone will start putting SARS-CoV-2 genes into the wastewater before they would go and get a test, so you might expect that the wastewater will go up before you would see cases go up. Okay, so what ingredients do we need to train a model like this? Well, we need the number of people who are infected in the catchment area in order to train a model like this. I showed you on the previous slide that we have measurements of the gene copies per liter at each wastewater treatment plant, or a subset of the wastewater treatment plants in England, but how do we then relate those figures to the number of infected? We need to figure out how many people are actually infected in the wastewater treatment area. That's a lot easier said than done.
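Written out, one simple version of this aspiration could look like the following; the log-linear lagged form here is just an illustrative choice (the talk mentions everything from random forests to GLMs), not the project's definitive model:

```latex
\log \mathbb{E}\left[ I_{c,\,w+1} \right] \;=\; \alpha_c + \beta_c \log G_{c,\,w}
```

where $I_{c,w+1}$ is the number infected in catchment $c$ in week $w+1$ and $G_{c,w}$ is the gene copies per liter measured there in week $w$; as noted above, the lag could equally be in days.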
And a big part of it that I'm interested in is that wastewater treatment catchments and public health reporting units, whatever they might be in your area, are spatially misaligned, meaning the geographic area that contains the people who contribute to a particular wastewater treatment plant is not the same as the geospatial polygons that the COVID data, the clinical data, is being reported on. And these catchments can be really funky, so there's no hope of them ever being spatially aligned. These are the wastewater treatment plant catchments in London. For one thing, they're not a partition. The public health reporting units are usually, well, almost always, a partition of your space, so the union of them is the whole area. These aren't a partition; there are gaps, which, at least in the center of London, are probably things like parks, so nobody lives there anyway. The wastewater catchment areas are in color, and the gray lines here are the boundaries of the boroughs in London. Some of these boroughs are completely covered by one wastewater treatment catchment, so for everyone who lives in this borough, their sewer system is the green one. But then there are others where there's some misalignment, so some people in that borough will be going to one wastewater treatment plant and some will be going to another. So what did we do about this? Well, we've done something really not particularly complicated. In our area in the southwest of England, where I am, the Bath and Bristol area, the smallest publicly available unit for clinical COVID-19 cases is called the middle layer super output area, MSOA. You can get the shapefiles for these from the Office for National Statistics, and they're not enormous.
They're sort of 5,000 to 10,000 individuals in these areas. So they're not the highest resolution areas we have in the UK, but they're the highest resolution at which they'll release the COVID data. And from these, we can get weekly confirmed cases. So let's let weekly confirmed cases in an MSOA be Y, with subscript w for week and m for MSOA. From the wastewater companies, you can also get the catchment files. These are not publicly available data, but through our relationships with the wastewater company, we know where the catchment boundaries are; those funky shapes, we know what they are. And from this, you can calculate, using pretty standard geospatial tools in R, the percentage overlap between the MSOA and the catchment. I'm calling this percentage overlap p, subscript mc. And then we're just doing a weighted sum, basically. If we want to know the number of cases in catchment area c and week w, then we take the original data that we have and do a weighted sum based on the percentage of spatial overlap between the catchments and our MSOAs, our public health reporting units. Okay. So there's a problem with this, which is that it works really well in cities, at least the city where I am sat. Where most of the people are, the MSOAs are fully contained in the catchment area. So here we've got about a dozen MSOAs, covering the vast majority of the population, and they're pretty much 100% contained in the catchment area. So if you live in one of these, your wastewater is going to the treatment plant where I then have data on SARS-CoV-2. Then there are some outskirts where there's a small overlap with the wastewater treatment plant catchment, but I'm not too worried about them because the majority of the data is coming from this higher density area.
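The weighted-sum aggregation just described can be sketched in a few lines. The overlap fractions and case counts below are made-up illustrative numbers; a real pipeline would compute the overlaps with geospatial tools, as in the talk.

```python
# Estimate weekly cases in a catchment as an overlap-weighted sum of
# MSOA-level counts: Y[c, w] = sum over m of p[m, c] * Y[m, w].
# All numbers here are hypothetical, for illustration only.

# p[m, c]: fraction of each MSOA lying inside the catchment.
overlap = {"msoa_a": 1.00, "msoa_b": 0.75, "msoa_c": 0.25}

# Y[m, w]: weekly confirmed cases reported per MSOA for one week.
cases = {"msoa_a": 40, "msoa_b": 20, "msoa_c": 8}

def catchment_cases(overlap, cases):
    """Overlap-weighted sum of MSOA cases for one catchment-week."""
    return sum(p * cases[m] for m, p in overlap.items())

print(catchment_cases(overlap, cases))  # 40 + 15 + 2 = 57.0
```

The same function applies per week, so running it over every week of MSOA data gives the catchment-level case series used to train the model.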
So remember, these MSOAs are roughly supposed to contain the same number of people. The fact that this one is massive means it's rural, and the fact that these are small means it's more urban. Now, if I go over here to another area where we have some wastewater treatment data, more of a town than a city, a sort of semi-rural town, we have these four MSOAs, which are a little bit denser, but even then none of them are 100% contained in my wastewater treatment plant catchment. And then we have more of these outskirt areas where there's a small percentage overlap. So it's not working as well there. And probably the kind of mismatches here mean that this proportionality assumption I've made, which is pretty simplistic, may be creating a bigger error in the total number of cases per catchment than it is over here. This matters in part because these catchment boundaries are not arbitrary. I can't show you the catchments for the area that I'm working in, but this is a picture from Sweden, where they have their own wastewater treatment dashboards; you can go have a look at them. Here's a picture of one of their catchments, which they do release, and it's super funny looking, right? But it's funny looking for a reason. You can see underneath here these brown dots: these are houses, building structures. So some of these weird little spindly arms are going out to grab a community.
So if I just randomly put down points, I might not hit anything. I guess what I'm trying to say is that there's a higher chance that the catchment includes a settlement than that it includes some piece of land where nobody lives, because the point of these sewage systems is to go out where the houses are. So this percentage overlap of the spatial areas is not really the right thing to use. I don't want percentage overlap of the geography; I want percentage overlap of the population. I want to know the percentage of my population that is living inside my catchment area, not the percentage of space that's inside my catchment area, because that will probably underestimate the population, because these catchments are meant to go out and grab the people. So what could we do instead? One thing that we're considering to improve this is looking at high resolution remote sensing data, which can tell you where the people are. We probably won't do this ourselves, because there are loads of other people who have thought really intelligently about how you go from remote sensing data to population counts on a really small spatial scale. This is a picture from the University of Southampton's WorldPop project, where they have population on, at least for my purposes, really tiny grids. So using these alongside our MSOA and catchment overlaps is, I think, our next plan, so that we can move from this geographic overlap idea towards the population overlap idea. Okay. So that's one piece of the puzzle.
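The change being proposed here can be written as swapping an area fraction for a population fraction; this is just the talk's idea written out, with WorldPop-style gridded counts supplying the population of the intersection:

```latex
p_{m,c}^{\text{area}} = \frac{\operatorname{area}(m \cap c)}{\operatorname{area}(m)}
\qquad\longrightarrow\qquad
p_{m,c}^{\text{pop}} = \frac{\operatorname{pop}(m \cap c)}{\operatorname{pop}(m)}
```

where $\operatorname{pop}(m \cap c)$ would be obtained by summing the gridded population estimates over the cells falling inside the intersection of MSOA $m$ and catchment $c$.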
That's the piece of the puzzle that I enjoy because, as Tom said, I like geospatial computing and I like doing these sorts of things, but there are many, many other pieces to the puzzle which we need to resolve before we can realistically fit this model where the number of infected next week is some function of the amount of gene copies per liter this week. This is publicly available data from a city in California. The purple is the wastewater averages of gene copies per liter, with cases alongside. And the thing is, it looks like the peaks match the peaks and the troughs match the troughs, but not always to the same relative amount, which is the problem. So this assumption that I had, that there's a kind of constant relative proportion, that a person sheds roughly the same amount of SARS-CoV-2 into the wastewater if they're ill, doesn't seem to be true, at least not the way that we've been able to measure it. A peak matches a peak, so maybe you can do some kind of slope-matches-a-slope, but I can't say that if I have this amount of SARS-CoV-2 in the wastewater, I expect there to be this many cases in my community. Even within one wastewater treatment plant, that doesn't seem to be consistently true. So there are many other challenges in trying to figure out how to model things which match the peaks, and whether we can translate that into properly quantitative information, or whether we can really only do this qualitative it's-going-up, it's-going-down, it's-at-a-peak, it's-not-at-a-peak kind of thing. So just to wrap up: as I said, I told you about one small piece and there are loads of other pieces, so it's a pretty big team, and I've given you some of the team members here. The PI of the project is Barbara Kasprzyk-Hordern, who's been working on wastewater-based epidemiology for a long time.
And then I'd like to thank the funders of this project, and you for listening. Thank you, Teresa. Thank you so much. So this was really more of an introduction to the problem than a solution, I guess. So the floor is open for questions; please, if you have a question, you can post it through chat or you can ask Teresa directly. Maybe I can start. Yes, there's a question, please. Sorry, Teresa, nice talk. Thank you very much. I wonder if you've come across something called the Toilet Board. It's something I got approached by about a year ago to look at this sort of thing, but actually not in the UK, in India. And they've got some quite cute projects along these sorts of lines for diseases, I think COVID as well as various other diseases. They put sensors in toilets actually, so it's a bit more immediate in terms of spatial relatedness. They might have something of interest to you. They're based in Geneva; they're a real thing, I promise you, I thought it was a joke. They've got quite a decent budget, you know, several hundred million a year, the sort of UN levels. So it might be worth you having a look at what they do. Can you share a link to that, please? I can try. If you can share it to the chat, it will be interesting to look at. Yeah, so I haven't heard of that specifically, but I do know that, well, I've talked about community-level projects in this work, but I'm sort of tangentially involved in one project that is building-level, let's say. So you can basically go pick up the manhole cover outside a building and take a sample that is only gathering data from the people who are in that building. They've done this on some university campuses in the States and in some other places.
I think not necessarily to inform any procedures that they would do on that campus, but more because you get clearer data. There are some examples from the US, like this time last year, where they were requiring all their students to get tested before they returned to campus. And then they know the addresses of those students, they can test individual buildings, and it's much easier to align the data, so you have much less of an alignment problem. Yes. I think these were better. The Toilet Board stuff was in India; they're building various smart cities, half a dozen of them for 250,000 people each, I think. So they're really putting some serious effort into it. And they wanted to build in a disease monitoring system, and this was it. I can't remember offhand what diseases they were interested in. I think it was TB. Yeah, so I started a project that was on this list of other stuff just before COVID started, and so TB is something that people are interested in. I have the GCRF here because that's one of the funders of international research in the UK. I have some collaborators in South Africa and Nigeria, and we were also wanting to have kind of autonomous sensors that can move around and do some sensing right on a piece of kit that's motoring around, rather than having to take a sample, take it to the lab, and that sort of thing. I think for COVID that kind of autonomous sampling is not possible at the moment, but yeah, it is one of the things that people are quite interested in. It looks like it's a field that's taking off. Yeah. And Teresa, tell us, the measurements of the wastewater in the UK for example, are they on a daily basis, or? It's a couple of times a week. So in the project that I'm involved in, it's mostly twice a week.
We are also occasionally doing some sampling every day of the week so that we can get a sense of the diurnal variations and whether there are variations within the week. In London, for example, you might see something different on Saturday and Sunday versus Monday through Friday, just because there are more people in London Monday through Friday who work there but don't live there. There's also a whole field of trying to figure out how many people are actually there, trying to understand this population movement by measuring things in the wastewater. You need many samples a week in order to do that, but I think the typical thing is a couple of samples a week. In the Netherlands they do the same thing, and I have a feeling, when I look at the plots for the Netherlands (the catchment there is even more complicated because it's a really flat area, right?), that it's more useful to look at these numbers from the wastewater than at the results of the tests, because I have a feeling they reflect much better what the actual infection is. The infection rates you get from tests depend on the number of tests, so it's up to people to get tested. But this is kind of objective: if there are fixed locations and they do the measurements, then it's objective, right?
Yeah, so like in this picture, we're seeing peaks match peaks, but we're seeing that the sizes don't match up quite right, and one explanation for that could be changes in testing. For example, the overall testing could go down, but you would still perhaps expect to see a peak. And that is a challenge here: because of changes in the way people get tested, you wouldn't necessarily expect this to be a consistent relationship over time, in addition to the fact that it was never exactly this relation in the first place, because you can only model with the data that we have. So I think exactly fitting this kind of relation is not realistic, because we're never going to know how many people were truly infected, unless we're in those really small situations like the example I gave with the university buildings, where students were required to get tested. And you tried fitting models to this data, right? You have fitted models to try to map the infection rates, I assume? Yeah. So some of the things that are challenging: the measurements of the virus are really heterogeneous themselves. I haven't talked about that at all, but I can post a paper about it if you're interested. You can see here that there's a lot of noise in some of these measurements, so it's hard to apportion variability to the measurements themselves versus changes in the actual rates of COVID. So we stuck with fairly simplistic things, like lagged linear or generalized linear models, that kind of thing. Okay, let's see if there are more questions for Teresa, please. Teresa, a question in the chat from Alan: is the level of virus in wastewater affected by rainfall? Yes, it is. So in the UK, and I think the east coast of the US for example, our sewer systems are old.
So the rainwater runoff goes to the same place as sewage from our toilets and from our houses. So you can try to normalize by flow. The wastewater treatment plants measure how much water is coming in every day, and you don't necessarily need to directly account for rainfall, because you can implicitly account for it by accounting for the amount of flow. If you have more flow through your wastewater system, then the contributions from humans are diluted. So you can try to normalize by flow. In other places it's less of an issue. In places that have sewer systems that were built this century, it's less of a problem, because the rainwater doesn't dilute the contribution of humans as much. You can still get some infiltration, so it's not like there's zero contribution to the wastewater treatment plants from rainfall, but in most modern sewage systems (and I'm not a civil engineer, but I think this is true) they're segregated: you get the rainwater runoff going one place and the human contributions going another place. But it is something to think about, and most groups think about normalizing for flow to take into account the effect of rainwater. I think for wastewater-based epidemiology for COVID-19 to be feasible, we need all of the other programs. You can't build and train the systems without having those original things in place. A really interesting question, I think, is how much money we have to spend on the classical stuff, everyone who feels unwell going to get tested. How much time and how much money do we need to be investing in that first, so we can build a system, before we can really use wastewater-based epidemiology as a standalone tool for monitoring COVID-19.
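Normalizing by flow, as described, amounts to converting the measured concentration into a daily load so that dilution by rainwater largely cancels out; a sketch of the arithmetic:

```latex
L_w \;=\; C_w \times Q_w
```

where $C_w$ is the measured concentration (gene copies per liter), $Q_w$ is the daily flow into the plant (liters per day), and $L_w$ is the resulting daily load (gene copies per day), which is more comparable across wet and dry weeks.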
There have been some cases in the UK that I'm aware of where it was really an interesting adjunct to the classical monitoring, which had to do with variants. So in the UK, in the wintertime, if they thought that there was a region that had an uptick in a particular variant, then they would tell everyone in that community to go and get a test, whether they were feeling well or unwell. They called it surge testing. And there were a couple of times where they did the surge testing because the wastewater pinged a particular variant that they were worried about. So that's one way where it has already helped, but again in more of a qualitative sense: we think that there is a particular variant in this borough (I think it was actually a borough of London), and so we're going to go do surge testing there. But I think it is a few steps away from being able to actually say we don't need the case data anymore. You definitely need the case data to train the wastewater models. There's also the time it takes for the sample to go to the lab and then to be processed in the lab. It seems like the window between when people start shedding the virus into the wastewater and when they would get symptoms and get a test is actually really small, maybe a couple of days. So this time lag of how long it takes to go from the wastewater treatment site to the lab is actually taking up a good chunk of that time. So this is one of the challenges: for it to be a useful surveillance system, how early does the signal come in? It may actually be only a little bit ahead of the signal that you could get from the classical system. So just to give a little bit more detail: the sample comes to the lab and then they have to do two steps.
First they have to extract, or try to grab onto, all the particles of RNA in that sample. And then it has to go to, I don't know the technical term, the PCR machine, let's say, which then tells you how much is in there. So you have to grab onto it, and then you have to figure out how much is actually in there. Each of these steps is pretty complicated and adds its own variability to the system. And at the moment, there's no real consensus, especially on the first step, about how you should extract the DNA, or, sorry, in this case, RNA directly from the wastewater. It's not my area of expertise, but sitting on the outside of lab meetings, there's lots of talk about what's the best way to do that, and there are differences even within the UK in how different labs are doing it. I think in the UK, the answer to that is honestly no, I don't think that they have used the wastewater data to decide about lockdowns in the UK. So I guess from my research, that's a no, but they are using that kind of information in places where case rates are lower. I think the reason that they haven't been using it in the UK is that most of the measurements always have COVID in them, because our cases have never really fallen too far down towards zero in the UK. And it's not so clear yet how you relate the fluctuations in the measurements in the wastewater to fluctuations in people; that link is still unclear. But there are places in the world where they have been using it, where the cases are low. There, the presence of SARS-CoV-2 RNA particles in the wastewater is a really interesting signal and something that's actionable. And that's not quite true in the UK because of those two things: the cases have never really gotten that low, unfortunately, and we haven't sorted out how to link quantitative measures in the wastewater to people.
Theresa Smith is a Lecturer in Statistics at the University of Bath. She received her PhD in Statistics from the University of Washington and went on to work as a postdoctoral researcher in spatial epidemiology in the Centre for Health Informatics, Computing and Statistics at Lancaster University from 2014 to 2016. In her current role at the University of Bath, Theresa specialises in working collaboratively with multidisciplinary teams to develop predictive analytics tools with applications to clinical and public health. In this talk, Theresa discussed the statistics and data science challenges arising from her ongoing work to develop community-scale monitoring systems for COVID-19 and other diseases using regular sampling and testing of wastewater. More on this project can be found at https://ewsresearch.com/our-projects/ews-c19/
10.5446/54084 (DOI)
Hello everyone, my name is Alin Voinea, I am from Eau de Web, Romania, and I am here to show you the easiest way ever made for you to run Plone on your machine and in production. So first let me start with this question: what is Docker? Do you know what Docker is? Raise your hand, please. Okay, everyone, okay, I am done here. Okay, so let me say it this way: Docker is for your system what virtualenv is for Python. I know it is more than that, but for now just look at it like a virtualenv for your system. So imagine that you have an isolated environment at the system level, so you can install everything you want in this isolated environment without messing up the other things on the machine. You have this isolated environment on all platforms, Linux, Mac or Windows, and it is the same in development and production. And more than that, you can use your favorite Linux distribution to build it: Alpine, Debian, CentOS, or, if you are more paranoid, you can build it from scratch. So let's see what the six stages of debugging before Docker were. It works on my machine; but from now on, it will work on every machine. So let's see how you install Plone without Docker, and I took this from the Plone documentation. You have to have Python 2.7 installed with build support for expat and zlib, then you have to install all these libraries, which on CentOS have different names than on Debian, then you have to create your buildout.cfg and run it; this is also from the Plone documentation. This will take way too long, and it may give you some syntax errors. But now, with Docker, all you have to do is docker run plone. And yes, it's official: Plone is in now, thanks to Sven. After a long process of acceptance, they finally accepted the Plone image into the Docker official repository. So now you can run it directly like this. So now, how do you test add-ons with this image?
You can just add them as an environment variable, with spaces in between. You can also pin the version here: for example, faceted navigation's new release is not out yet, so this command will take the previous version, but if you put 10.0 release candidate 1, you will get that one. How do you develop Plone add-ons? You make a sources directory on your machine, you git clone the code in there, you give Docker permission on that folder, and then with the DEVELOP environment variable you tell Docker and Plone to develop this add-on, and you mount this src folder inside the Docker container. How do you debug? For that, you have to enter the Docker container and start Plone from there. Storage: where is my Data.fs? Well, by default this image is using Docker volumes. When you run Docker, it will create a cryptic name for your volume, and the default location of Docker volumes is in /var/lib/docker/volumes. You should use named Docker volumes, because if you don't, when you remove your container you will also remove the volume, so you may lose data in this process. That's why you should use named volumes; then you will have to explicitly remove them if you want to remove the Plone container's data. Now, ZEO: we also put ZEO in this image, and it was a suggestion from the Docker guys, because there is no official ZEO image. The guys said: but you have ZEO in Plone, why don't you just do it like that? And we did it like that. So you can run ZEO and then start your Plone clients. How do you extend this image? Because you should extend it: you shouldn't abuse the add-ons environment variable, because when you do that, when your container is first created, it will rerun buildout. So you can test like that, but in production you should create your own Docker images.
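The commands described above might look roughly like this. This is a sketch following the official image's conventions at the time of the talk; the ADDONS and ZEO_ADDRESS variable names, the /data volume path, the ZEO port, and the version pin are assumptions to check against the current image documentation, and all of these need a running Docker daemon.

```shell
# Run the official Plone image, mapping the container port to the host.
docker run -p 8080:8080 plone

# Test add-ons: space-separated, with optional version pins.
# (This re-runs buildout when the container is first created.)
docker run -p 8080:8080 \
    -e ADDONS="eea.facetednavigation=10.0rc1" plone

# Use a *named* volume so Data.fs survives container removal.
docker run -p 8080:8080 -v plone-data:/data plone

# ZEO server plus a client pointed at it.
docker run --name zeo plone zeo
docker run --link zeo -e ZEO_ADDRESS=zeo:8100 -p 8080:8080 plone
```

Note the named volume plone-data must then be removed explicitly (docker volume rm plone-data) if you really want the data gone.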
So you have your buildout.cfg and you put your eggs there, and then you create your Dockerfile: you say FROM the Plone version, USER root, and you also install system dependencies, then COPY the extended buildout into the Docker image and RUN buildout. And then you build it like this. More documentation is in the Docker docs.

Now, orchestration. For orchestration on your machine, or on a single host, you have the Docker Compose tool from the Docker guys, which you install with pip install docker-compose. Then you have the YAML syntax and put it like that. And you don't have to define the ZEO clients ten times — you just use docker-compose scale plone=3. Here I also added a load balancer for this Plone, and here is a demo.

So this is our docker-compose file. I do docker-compose up -d, and it starts the services. You can see the logs — combined logs, or for each of the services. Plone takes a while, but now it's starting, and you can see our Plone site. If I go to the load balancer admin page, I can see that I have one backend here. Now if I do docker-compose scale, we'll have more backends — the backends will appear. The image I used for the load balancer automatically discovers backends based on environment variables, and now you can see that we have the HAProxy backends up and running. And here I create a Plone site — but you know how to do that already, so we can skip that.

How do you do orchestration on multiple hosts? Here we have the Docker solution, Docker Swarm; there is also Kubernetes by Google; and the cloud providers have started to add their own integrations for containers — Azure has one too. But today I'm going to tell you about Rancher, because this is what we have been using for the past year, and I have more experience with it. So what is Rancher? Rancher is a complete platform for running containers. And how do you install and run it?
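The single-host setup being demoed can be sketched as a compose file like the following. The `zeo` command and the `ZEO_ADDRESS` variable come from the official image's conventions; `eeacms/haproxy` and its `BACKENDS` variable are my assumption for the self-discovering load balancer mentioned, so verify the exact names against those images' docs:

```yaml
# docker-compose.yml (sketch)
version: "2"
services:
  zeo:
    image: plone
    command: zeo            # run this container as the ZEO server
    volumes:
      - data:/data
  plone:
    image: plone
    environment:
      ZEO_ADDRESS: "zeo:8080"   # point the client at the ZEO service
    depends_on:
      - zeo
  lb:
    image: eeacms/haproxy   # auto-discovers backends via env vars
    environment:
      BACKENDS: "plone"
    ports:
      - "80:5000"
volumes:
  data:
```

With this in place, `docker-compose up -d` starts the stack and `docker-compose scale plone=3` adds the extra ZEO clients, which the load balancer then picks up.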
You just docker run rancher/server. And here is another demo. So here is my docker-compose for the Rancher server; I already started it, you can see it in the background. Let's see — I don't have any hosts registered, so I can register them directly through Rancher. You can use the custom method to register, by running a command on the backends, or, as you can see, Rancher has already integrated some clouds into its interface. Here I'm using Packet, and I have some API tokens — I'll have to wipe them after this. This is just a demo setup. Here I added some labels for these machines, to mark that these machines are for Plone backends. I will also add some machines on DigitalOcean in the same way, so I will label them by "digitalocean", and I can choose how many machines to create. Here again the API key; I can also select the size of the machines, the region, and also the labels.

You can see now that the machines are being created — also on Packet, they are popping in. Now they are activating, and now they are ready for us to deploy Plone on them. Here I will add a container, just a single container of Plone, to show you the interface they offer. You see the image, you map some ports, and you can see here that you can customize commands and environment variables, and also say where to run this Docker container. Because it's the first time, it will take a while — it's pulling the image from Docker Hub, extracting it, all the downloading part — and this is also not a very performant machine. You can see that now it's up and running, it's green. If I go to that machine, I have Plone running in the cloud. I don't know what I'm doing here — I don't want to create a site. Okay, I cut that part. I will remove this for now — remove it and then purge it. And from the catalog: last week I submitted a request to the Rancher community catalog.
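Getting a Rancher server and hosts up, as shown in this demo, looks roughly like the following. The registration URL and token are placeholders: the real command, including a one-time token, is generated per environment by the Rancher UI under "Add Host":

```shell
# Start the Rancher management server (single-host evaluation setup):
docker run -d --restart=unless-stopped -p 8080:8080 rancher/server

# On each host to register (copy the real command from the UI's
# "Add Host -> Custom" screen; this is only the general shape):
docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent http://<rancher-server>:8080/v1/scripts/<TOKEN>
```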
And you can see here that we have a custom deployment for Plone. Here you can add add-ons, just to test them; you can scale how many Plone ZEO clients you want. And here you can see the output, because you can also grab this output on your machine and run it with rancher-compose from the terminal.

So here is my stack with a ZEO server. The ZEO server is now starting — you can see that Plone is waiting for it to start up. It's starting, because it's also extracting, and the Packet machines are still being created. This cloud provider is very interesting, because they say they offer bare-metal machines. I don't know how, but they are very fast; I tested it, and compared with DigitalOcean it seems very fast. You can see the Plone backends spread across my machines. Now they are initializing, and one of them died for some reason — but I left it in this video to show you what happens next. You will see that it gets restarted by the Rancher health check. There is a health check that verifies that your containers are still running, or responding on TCP port 8080, or you can also add a health check for an HTTP request — so it shouldn't only reply on port 8080, but also not return an HTTP 500 error.

Now I have Plone running, and the load balancer is also up and running. If I go there, I can create my Plone site. Let's see — so this is our plan: you can see that this is for the EEA website. This is in draft version two, but we plan to do continuous deployment with GitHub, Docker, Jenkins, and some tools from Rancher — you just push to GitHub and it goes to production after it passes these checks and tests and final steps.

So you can start from the official Plone image on Docker Hub; then we will have it in docs.plone.org.
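The health check that restarted the dead backend can be declared per service in `rancher-compose.yml`. This fragment is a sketch of Rancher 1.x's health-check schema; the field names are as I recall them, so verify against the Rancher documentation:

```yaml
# rancher-compose.yml (sketch)
plone:
  scale: 3
  health_check:
    port: 8080
    # TCP-only check by default; adding a request line makes it an HTTP
    # check, so a 500 response also counts as unhealthy:
    request_line: GET / HTTP/1.0
    interval: 2000            # ms between checks
    response_timeout: 2000    # ms before a check counts as failed
    healthy_threshold: 2      # consecutive successes to mark healthy
    unhealthy_threshold: 3    # consecutive failures to mark unhealthy
```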
Sven promised me that this sprint we will have it there, and we will also add a link on plone.org in the download and installing sections. It is also on docs.docker.com and rancher.com. And if you need examples of big deployments — rancher-compose examples, docker-compose examples, deployment examples — you can just go to our GitHub repository and search for Docker, and you will get plenty of repositories with documentation and examples.

Now I'd like to thank Sven for helping us make this possible, Antonio De Marini for pushing things and not stopping pushing things forward, Eau de Web for letting me do this, and of course my wife, for not letting me look like a geek today. Okay, thank you. Questions? I didn't expect that. Do we have a microphone?

Well, personally, this is the first time I hear about Rancher. Can you tell us a little bit more about it? About Rancher? The Rancher software is open source, you can run it on your own machine. There is a big community around it; they have a forum with a lot of questions and replies, and they are very responsive. You saw the interface, but you can also run it from the terminal: `up`... okay, `up -d`. So it says here that the ZEO server is not in sync with the deployment, so I can do the upgrade from here — upgrading the ZEO server, because I added... where is it? Let's see, where are you? Too many screens. Okay. So we missed the upgrading part, but the upgrade is, you know, stopping your previous container and creating another one. And you see here that there is a "finish upgrade" step: if you don't do that, you can do a rollback — if something is not working, you can easily roll back — and if you finish the upgrade, it will delete the old container. What else can I tell you about it? Let's see how our Rancher looks. This is a development environment.
You can see a lot of testing here. And the production one. You have these environments so you can easily separate applications and stacks by teams, or however you like. Also, if you put this in the Rancher catalog, you will get this upgrade button when something new is available, and you can do it directly from here — but as we like the terminal, I prefer the terminal. What else? You have users. If I go to manage environments, here you have the users and the roles: a member can deploy stuff, a read-only user can only view, and the owner has full rights — you can see here that I don't have the full rights. There is also the admin, and you can see all the actions that have happened in here. You can also deploy it on multiple nodes, because when you have multiple environments it gets slower, but you can work on performance by deploying it on multiple nodes, with MySQL outside the Rancher server, and all that stuff.

Yes — how quickly can you go from bare metal to the cloud? You can also register bare-metal machines here. When I said bare metal, it could be a Docker instance, it doesn't have to be a dedicated server. But just from nothing to something? It depends. Most of the work is in preparing the images and doing it right, because you shouldn't mount things from the host — you shouldn't mount configuration files from the host; you should use mostly environment variables to customize your deployment. So it can take from one hour to one month, it depends. Okay, thanks.

Yes — so I have two questions. First, the base Docker image for Plone: when you do docker run, does that run buildout at that moment, or only when you provide the environment variable with the products? It will run the buildout only when you provide the add-ons.
Or the DEVELOP one, or the ZEO one for the ZEO server. There is a script inside — so I'm not re-running buildout there, I just do a replace in zeo.conf and in zope.conf, the Plone configuration file, to use the ZEO server.

Right, so the second question: when I started playing with Docker and Plone, I first thought of doing a base Plone image with the base packages and then extending it to have additional eggs. But what I found is that my base Plone image has some size, and when I use it to run buildout again, to extend it with two or three additional packages, I get the size of the eggs folder duplicated, because it's running buildout again and touching the whole directory again. So how did you manage to solve this — or don't you worry about the size increase?

No, no — we do care. You can see here that we do a lot of cleanup at the end. And you have to tell Plone where to store these eggs, because I think your buildout is using another path for the eggs, or you still have them in the buildout-cache eggs directory. Because if you look here... But what I found is that since Docker uses a layered filesystem, even if you remove it in the Dockerfile — even if the recipe says remove this — it will still take the space anyway. Sven, can you?

Well, when we designed the Docker image for Plone, obviously we were thinking really long about how to do it. We decided on a kind of compromise: we tried to take the best known practices from Docker and from Plone. For example, that you can now add eggs with an environment variable and it just runs — that is a trade-off: it's easy to use, and the cost is that it's still a small image, but slightly bigger than a hardcore custom image. And here is what I, for example, do on my own setups to avoid your problem.
I have a Docker image only for building, and then I just grab the compiled result out of this image and push it into another image. Or I have an old build machine, with Fabric, only for building, and I avoid too many layers. I mean, it's cool to do it and it works well, but then we're really getting into the cool, exciting Docker stuff. For the official image we compromised, so it's easy to use, it runs stable, and it's also not that crazy big. The trade-off, to make it easy to start, is that we still have a compiler inside the image, so if you add an egg with the environment variable, it will run buildout. If you think you don't want that and your image is getting too big, then you should start making your own images. But these are fine — they are running in production on several sites. And really, he did most of the work.

Anyone else? I'm wondering how, in this approach and in the multiple-hosts approach, you handle — I think the keynote this morning would be a good example — how do you handle it when there's important data in the Data.fs file or blobstorage? How does that work? Like a security issue, or where does the ZEO server end up? I mean, how do you handle it if there's that kind of application? This morning she talked about building these entire sites through the web, so that's not going to the filesystem at all. So how would you handle a product like that, where the user is constantly adding to it and building on it, and there's critical data that you need to deploy a test version of, or something like that? So you're asking me how you get the Data.fs out? Just how would you handle it, in the context of Docker?
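The "build in one image, ship another" idea Sven describes above is now natively supported by Docker multi-stage builds (Docker 17.05+). A sketch, assuming a buildout config named `site.cfg`; the exact paths and the buildout entry point inside the official image are assumptions to verify:

```dockerfile
# Stage 1: run buildout where compilers and dev headers are available
FROM plone:5 AS build
USER root
COPY site.cfg /plone/instance/
RUN buildout -c /plone/instance/site.cfg

# Stage 2: copy only the built result; stage 1's fat layers
# (compiler, caches, intermediate files) are discarded
FROM plone:5
COPY --from=build /plone/instance /plone/instance
USER plone
```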
For this — I mean, there is nothing from Plone itself, but at EEA we created an rsync Docker image. You run the server on the source host and the client on the target host. Let me show you that. You can always SSH to the machine and rsync things from there, but if you don't have access to that machine, then we have this rsync Docker image. So you create a client, and — you can see that we also have cron tasks — on the client you can put crons, to sync nightly, for example. We are using this on staging and development: one is syncing from production every night to staging, and from staging to the development environment. So you run the client, and the client will generate an SSH key, and then you run the server with the public part of the SSH key as an environment variable.

But containers can die at any point in time, right? They're designed to be stateless and to die at any time. Exactly. So how do you handle data? Imagine your container with a database is dying — how do you get this data to another container, how does your site still have all of its data? The thing is that at EEA we switched to Postgres and RelStorage, and we now have one Postgres master and one replica. It replicates in real time, and if something happens, we can easily switch to the replica. So you're using Postgres instead of the ZODB file — but what do you use as a backup? Where do the containers store the Postgres data? You can always run Postgres without containers, but we wanted to do it this way too. And the data is not in the container — it is in a Docker volume.
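The rsync-container sync described above looks roughly like this. `eeacms/rsync` is EEA's image, but the exact environment variable name for the authorized key and the container paths are my assumptions — check the image's README:

```shell
# Target host: start the client; it generates an SSH key pair and
# supports cron'd sync jobs (e.g. nightly production -> staging).
docker run -it -v plone-data:/data eeacms/rsync sh

# Source host: start the rsync/SSH server, authorizing the client's
# public key (variable name is an assumption):
docker run -d -p 2222:22 \
  -e SSH_AUTH_KEY="ssh-rsa AAAA..." \
  -v plone-data:/data eeacms/rsync server

# From the client, pull the data over SSH:
# rsync -az -e "ssh -p 2222" source-host:/data/ /data/
```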
That's why I said: don't rely on anonymous volumes, use named volumes, because the volumes are persistent. And you can enforce placement — here, with these labels, you can say Postgres goes only to this machine, and only to this machine. So if you want high availability for Postgres, you create three machines: one master and two replicas. Also for the front end you should have one machine for Apache, Varnish, memcached, and another one — and then for high availability there is also DNS high availability. We didn't try it; we know it exists, but we'll do more in the next month.

Yeah, question. When you run Plone in Docker, what user do you run Plone as? A lot of Docker Hub images run everything as root, which is insecure. Does it run as root, or does it run as a plone user? That's why I said here that you will fight a lot with this user. Let me see where it is... See? There is a dedicated user. By default, unless you give Docker a user to use in that container, the container will run as root. No — we fall back to the plone user. If you look at the Dockerfile, you'll see at the end... Yeah, that's what I was asking. You can see that the user is plone.

And one more question: have you ever looked at OpenShift instead of Rancher? No, but maybe Sven has. I also run OpenShift stuff, though not Plone on OpenShift. Does it work pretty well? If you want to talk OpenShift, you should come to the open space later today. Anyone else? Good question — is it possible to run OpenShift self-hosted, or are you talking about OpenShift on Red Hat's cloud? You can do both. Okay. Thank you.
Easily develop and deploy Plone based application stacks with Docker and Rancher.
10.5446/54087 (DOI)
Welcome everybody. I hope you had a good lunch — I hope you're not here for a nap. I'm Jeff Bowman, the IT manager for The Mountaineers. There's my email; feel free to contact me with questions or hit me up in the hallway out there.

First, a quick overview of what I want to talk about today: who we are, so you can understand what our needs were. We're an end user of Plone and Salesforce, so I'm not a developer — though I did plenty of Fortran programming as a grad student, some other stuff as a chemical engineer, managed some projects, written a little HTML, and I know SQL pretty well from our old database. So I know a little, but I'm by no means a developer; this is definitely an end-user talk. I'll take you through our high-tech journey of launching a new platform, then how Plone helps us and how our instances of Plone and Salesforce work together, and then a little about how we use Salesforce and what we use it for.

So who are we, The Mountaineers? Our motto is explore, learn, conserve. Our mission is helping people explore, conserve, learn about, and enjoy the lands and waters of the Pacific Northwest and beyond. We're headquartered in Seattle, but our trips go out globally to a certain extent, and we have a publishing reach all over the world too — so that "beyond" part is important to us. Well, we're in Boston, and quite honestly our history, even though it's 110 years, doesn't quite match Boston's, but we've been around for a long time. So we have a lot of ingrained things in our existence, and we've added a lot over the years.

We serve a lot of diverse audiences: the general public, the climbing community, the hiking community, the sea kayaking community — we do tons of stuff outdoors, so we have tons of little communities that interact with us. We have guests — people who come try stuff with us without actually buying a membership or doing much beyond that.
Some of those people turn into members, who pay annual dues and do lots of stuff with us. We have a very rich volunteer corps at all levels, from our board of directors all the way down to somebody just leading a hike or helping teach a course. Of course, we're a 501(c)(3), so we have donors — people who give us money. That's something relatively new to us, and part of why we adopted the technology we did. And of course we have a staff — people who actually get paid to do stuff for us — and all of these groups need access to web tools and other technology tools. And we really don't cater to any one age group: adults, youth, families, pretty much everybody. So we have a pretty broad range of needs that need to be served.

We do many things, as I mentioned. We have activities and trips; we offer courses, clinics, and seminars; special events like film festivals; youth programs; book publishing. Just to put some numbers on it: in the last year we had about 2,200 trips, with 3,000 unique individuals participating and 14,000 total — so each person doing more than one trip. For courses, those numbers are about 217 courses, 3,600 unique individuals, and 6,400 total students. In conservation we have a pretty rich history: we did a lot with the Wilderness Act, and a lot of lands in Washington are preserved because of work we've done. Special events are things like film festivals, gear grabs, and fun seminars at our program centers. We do youth programs: a youth outreach program where we work with youth groups and schools to put on outdoor programs; some year-long programs for different age groups, which we call Pioneers, Explorers, and Mountaineers Adventure Club; and a summer camp that runs about five or six individual weeks in the summer. And finally, we publish books: about 500 titles in print, all flavors of guidebooks and instructional books. This is Mountaineering: The Freedom of the Hills.
It's our flagship book, the one that launched our book publishing in 1960 — it was our first title and is still our breadwinner.

Here's a bit about our high-tech journey. The first database I heard we had was a bunch of ring binders on a lazy Susan on a round table, with our member services team around it taking phone calls and writing things down in the books. That eventually morphed into our relational database, called ARev. Then we got this iMIS system by Advanced Solutions, with a very data-driven website. It didn't look too good, and it wasn't easy to manage: you had to know ColdFusion and HTML, and SQL — skills I picked up working on that old website. Somebody decided we should give it a facelift; we called the project "lipstick on a pig". We made it look better, but it didn't work any better — you still had to know HTML to do anything on it, to make anything look halfway decent.

So we decided we needed something new, and we started looking. We gathered feedback from our members, our volunteers, our guests. We use this wonderful system called UserVoice. It's a two-part system: one part is a way for people to say "I suggest you do such-and-such", and other people can vote for it. You can specify how many votes each person gets — we gave them ten — so people have to think about what's most important to them. That helped drive what we wanted to do with our system. The other side of it is customer service, which we use for incoming "this is broken, I need help with this" kind of stuff.

After a long RFP process — as complicated as we were, not many shops wanted to work with us; we only got two responses. One was our former provider, and one was a company called Groundwire, who went under after they had gotten us ready to launch. Then some of the Groundwire folks became Percolator Consulting, and they brought us Jazkarta.
So those are two wonderful partners who shepherded us through this whole process. They helped us identify what our superpowers are: our volunteers were number one; courses and trips are what attract the most people; our general community outreach, and the community's respect for us as the ones who know how to do this safely; and our content — our books, with a huge reach all across the world. They helped us prioritize what was most important to do, and in what order. I would almost put "simplify" first, because we are pretty complicated, but that's not quite the most important thing any of us could do.

They said we need to create a great online experience around our courses and trips: good-looking content, images, text you can format, make things look good — not just words on a page that don't look all that great. Offer great content for our potential members: we've got our great books, but we really didn't have a good online presence. Pages didn't look that good, and it was hard to update them — me and one other guy in the shop who knew HTML, the two of us having to update all the content on our website, made it impossible to keep up or have anything good out there. We were forever fixing spelling and grammar instead of actually putting out new content. We wanted to build an easier on-ramp for potential members: it was really hard for somebody to join. They had two or three pages' worth of junk to fill out just to become a member, and most people would probably stop mid-page, and that was it — we'd lose them. And we wanted to build a foundation for connecting members to one another; they really had no way to do that on our old website.

The other thing is we had trouble reporting on things that matter to us. I was the only one in the house who knew SQL, and I was the only one who could write reports.
The best I could do was maybe a little bit in Excel, where they could run something and update it there and get some data; otherwise, I was the only one who could get data out of our system. And of course, like I said, simplify — that should probably be first: simplify everything.

They helped us choose a good set of platforms to work with. Plone and Salesforce is a big combination. The other two things over there — how many of you know Salesforce? How many know Plone? Everybody's hands, come on, all of them. Stripe is a payment processor — how many know Stripe? Anybody? It's been wonderful; it was easy to integrate. Anybody know Acumen? No? It's very specific — David does, very well. It's specific to the book publishing industry: a great system for warehouse inventory, online sales, author royalties, book accounting practices, all that kind of wonderful stuff. It's a little hard to use, but we've integrated it into our Plone-Salesforce-Stripe platform.

We play a lot of poker. Agile development — that's another thing that Jazkarta and Percolator brought to us. It's a wonderful, wonderful process. As our project progressed, we could make changes on the fly. We could decide, okay, this is not going the right direction, and change it. If we thought something was really easy and it turned out harder, we could spread it out and move things around. So that was a wonderful process. We've been doing it for three and a half years now, and it just amazes me how easy it is to use and how flexible it is.

This is a rough schedule. When we started the project, there was a little pre-discovery with user design and visual theming. Our website launch was 20 iterations, and we've probably done — do you think we've done 20 since then, David? Probably not quite. Maybe 14, 15? A dozen. We've done at least a dozen since then, so we've added a bunch of features.
Data migration and testing ran through the whole process; for the final testing we actually got volunteers to test for us, so the last couple of months were really testing and lots of bug fixing. Data migration was a big thing for us. The team did a wonderful job of giving us a repeatable process based on transmogrifier, with a UI where I could export all of our data with SQL, have a nice CSV, upload it to the website, and there was our data. That was important because we have lots of data to migrate, and we had to update it. One of our big things is the routes and places database that helps us prevent conflicts when we're scheduling activities. We needed that on the website too, but of course it had no images, no good text — it was just a title and a wee bit of data. So we needed to get those into our new system and have a bunch of volunteers adding content: adding photos if they could, adding good descriptions, making sure all the data was accurate, getting rid of duplicates that existed within that system. So that was a wonderful, wonderful tool for us.

Of course we use GitHub for our code and for tickets, and David and I have a pretty good system worked out for "this is urgent, get it done right away", and every month we send them a top ten of things to work on. So that's been a great help to us. And ScrumDo is what we use for agile development — all of our user stories. Any ScrumDo users out there? I think they've upgraded since the version we're using. A couple of things weren't the best UI, but I found ways around them. It's a great way to lay out our project and decide what we're going to do when, and we can easily swap stuff in and out.
And one of the things I've discovered is best about this process, now that we're beyond launch: I spend a little time well before any planned iteration with the developers to estimate all the stories, to make sure we can get done what we need to — and if we can't, we rearrange things appropriately.

We also use Google Analytics. It helped inform our website design, and we also recently added a mobile theme; it helped us decide what was most important to do first there, in case we couldn't get everything we wanted. And, as you'll see soon, we use it to inform our "most popular blogs" portlet.

Okay, let's dive into what our website looks like. It's beautiful and easy to use. That little one right there is the mobile homepage, and behind it is our regular homepage. We've got a beautiful, good-looking, well-working website, and as I said, we just launched mobile. Some things to note: we also upgraded to Sass to make our CSS easier to maintain, and — Dave reminded me — Autoprefixer, to put in browser prefixes. Those are two things that helped the developers make it easier to update and work better overall. We deploy our website onto Amazon Web Services using their OpsWorks. We use Archetypes on Plone 4.3, which is our version of Plone; at some point I'm sure we'll upgrade to 5.

The two big things we got out of it that we needed most were blog posts and pages where our staff and volunteers can go in and update content without sending anything to me. It's wonderful to sit at my desk and not have a bunch of requests: "I need you to fix this page, I need to add this here, can we put this info there?" This is fantastic. For that most popular blogs portlet, David wrote a wonderful little tool for us that pulls from Google Analytics the number of times each blog post was viewed in the last week, and that's how we determine the top six or so blogs there.
We of course wanted our members to interact with one another, so we wanted a member profile where you can have a picture, a biography, some info about your favorite activities. Being conservation-oriented, we're also in the business of carpooling, so people can put up default carpool information when they register for activities — it facilitates them organizing themselves. Oh, and important: this is a custom Dexterity content type, but it's connected through membrane to the user archetype. We did that so we could do faceted searches and have a member directory, a volunteer directory, that kind of stuff, based on the contact records. And that's me, by the way.

The other thing we got were badges. In our old system we were kind of scattered all over the place. We have six branches all over the Puget Sound, and a lot of them like to call courses different things, and we'd get lots of questions from people close to a couple of branches asking how these two things are different. This is our way to say: this is a basic climbing course, this is a backpacking course, this is what you learn when you finish it — you get this no matter which branch you take it at. So this was a big thing for us too. And it's content-rich: we've got these nice badge images that look great when they're big on a page or small in somebody's profile. This is what they look like in my profile — some course badges and leader badges there. It also tells people what our skill sets are: what have I learned, what do I know. We also have some award badges to honor volunteers, or members who have hiked or climbed a lot. So it's another wonderful part of our system.

Faceted search: we have a bunch of faceted searches, all based on EEA faceted navigation and Solr — activities, routes and places, trip reports, courses, clinics and seminars.
And instructor opportunities, so we can get volunteers to help teach our courses when we need them. A volunteer search: if we're looking for somebody who's got a special skill set, there's a way they can specify what their skills are, what their interests are, and we can go find them when we need people. And also our site search. We turned that into a faceted search so we could actually make it easier to find all of our content. It's kind of different from the standard Plone search, which is just a normal search; we replaced that with a faceted search that's working quite well. We also added some sorting capability to that, which we'll probably add to our other faceted searches. We put it on a map. So all of our routes and places and activities and trip reports have a way to add a point or two on a map, and we can do searches that show results on a map. We used a thing called Products.Maps, a Google Maps add-on, with a little bit of customization. Here's a look at what happens if somebody does a search and comes to a course page. This is what it looks like. What's on the right there is what you see at the bottom part of the page. It's for a Learn to Belay course. This is a great way to give people quick information at the top about when this course is happening, with a nice image to the right. And then that bottom part, we call it the tab nav area: it lists the course requirements, that's all of the lectures and field trips and things they need to do to complete the course; a roster, which we'll see an example of in a little bit; any required equipment; and then course materials. So there are handbooks, student syllabus, quizzes, whatever, and we can put all those there. Being in Plone, it can be a file upload, it can be an image, it could be a link out to somewhere else. We're looking at working on an e-learning platform, which will probably somehow integrate through that, where they can do bits and pieces there. We also wanted a shopping cart.
In our old system, it was really hard to buy stuff. It was literally one item at a time; you couldn't put a bunch of stuff in the cart and check out. So we wanted somebody to be able to join, sign up for a course, buy a book, anything they wanted, in one transaction. That's where Stripe helped us out, and all of that's integrated into one place. So there we've got a membership, a book, and a Learn to Belay class all in one cart and one checkout process. We also made a really wonderful, flexible promo code system that helped us simplify. We had some 20 membership types; we crunched that down to six, and we use promo codes for other things like out-of-state membership. If we want to offer a low-income membership, we can give them a discount, that kind of stuff. So this shopping cart was another big, wonderful thing that helped us simplify and make our members' lives a lot easier. Rosters. So once people are signed up, we've got a bunch of people on a course. Our leaders can go in; if they need to add somebody like an instructor, or somebody's taking the course for free, or somebody helping with the course, they can add them themselves. They know who their leaders are. They know who all is registered for the course. They can do things like send an email to selected people, or copy email addresses if they want to use their own email client. If they need to update stuff when it comes time to graduate people or whatnot, they can do that. And then we also have a download capability: they can download the roster to a CSV if they want to do more detailed analysis than we offer on the website. So we give them that ability to download some data. And of course, in the rare chance that we've got to cancel the course, we have an option for that too. But those rosters are just for leaders. Remember that tab nav area I showed you earlier? We also have a roster on that. This is the view that people, when they're registered for a course or an activity, can see. They can email each other.
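The flexible promo-code idea described above can be sketched in a few lines. This is a hypothetical illustration only: the names, the percentage-based rule, and the category field are all invented here for clarity; the real system is Stripe-backed and Plone-specific.

```typescript
// Hypothetical sketch of a flexible promo-code discount at checkout.
// A code maps to a discount that applies only to one category of cart item
// (e.g. a low-income discount on the "membership" item).

interface PromoCode {
  code: string;
  percentOff: number; // e.g. 20 means 20% off
  appliesTo: string;  // item category the discount targets
}

interface CartItem {
  category: string;
  price: number; // in cents
}

// Total the cart, discounting only the items in the promo's category.
function applyPromo(items: CartItem[], promo: PromoCode): number {
  return items.reduce((total, item) => {
    const price =
      item.category === promo.appliesTo
        ? Math.round(item.price * (1 - promo.percentOff / 100))
        : item.price;
    return total + price;
  }, 0);
}
```

The point of the design is that one generic mechanism replaces many special-cased membership types: the code carries the rule, not the membership.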
People can make their profiles private, in which case they show up here in name only, with no link, no way to email them, that kind of stuff. So we keep privacy under wraps. This is a great way for members to interact with each other: to organize group gear, to make plans for socializing after the activity, to organize a carpool, that kind of wonderful stuff. Trip feedback. Since we're focused on trips and courses, we wanted to give our leaders feedback to help them improve, let them know what they're doing well and how they could do things better. We use PloneFormGen for this, pretty much as is for some things; for this particular one, a little bit of customization. We have put up some other forms for people to send us information, like: I'd like to become a leader, here's my info. Or: I've already done that stuff, I'd like to get equivalency; I don't want to have to take that course, I already know how to do that, so I can get a badge that way. Simple forms all the way up to these more complex ones. And part of the reason we did that customization was so that we could aggregate the leader feedback and show it to the leaders and their committees on a trip-by-trip basis. Leaders have a kind of my-feedback page; they can see all of their feedback for the courses they lead, the trips they lead. And to do this, we used the save data adapter in PloneFormGen. Okay. Community is one of our big things. Like I mentioned earlier, we have branches up and down the Puget Sound, if anybody's familiar with the area: Bellingham's probably farthest north, out to Kitsap, down to Olympia, across the water to the Snoqualmie Foothills area. We've got youth outreach programs, which we've talked a little bit about.
We've got two program centers, one in Seattle and one in Tacoma, where our staff are housed and where we also do a lot of our special events, a lot of our lectures, and some of our training courses. I would say both our program centers are fairly unique climbing and outdoor education facilities in the country, so we can actually do a lot of stuff there that we used to have to go into the field for. And of course we have some outdoor centers, some ski lodges: up at Mount Baker and Stevens Pass and, where are the other ones, Kitsap, Snoqualmie Pass, and Meany, in the Stampede Pass area. Oh, let's see, next thing. So for a lot of our special events, we of course want to put them on a calendar, and for this we used ftw.calendar for displaying events. We can actually put events, activities, and courses, which are all Dexterity content types, onto the calendar, which is a wonderful way for smaller branches to show, here's all the stuff we're doing, so their calendars don't look so stark if they only do one or two events. And for other calendars, like our program centers where we've got tons of stuff going on, we can say these are just events; we're not going to show activities and courses and all that sort of stuff here. We also wanted to make it easy for people to give us money without having to go through a put-this-in-my-shopping-cart kind of deal. They click on a donate button, they come right here, and they don't have to log in, don't have to create an account, they can just give us money. Of course, all this stuff is going to be synced to Salesforce, and we can do some de-duping there if people already have an account or already have a record, so they can give more than once and we can get that recorded for them. So finally we have Salesforce. How do these two things work together for us?
One of the things we do, we mentioned donating: of course we have campaigns and different funds, and we wanted a way to keep Salesforce and Plone in sync with each other. Salesforce is really easy to work with in terms of setting up campaigns and setting up funds, much easier to do it there than to build a whole other thing on the Plone side. So we use Plone just to pull that information in, so we can use it on a Plone donate page. We can say: anybody that donates on this page, it goes to this campaign, or this fund, or this combination of campaign and fund. So that's a wonderful tool for us there. We sync a lot of data between Plone and Salesforce. Plone is kind of our main data generator, and it's immediate: somebody buys something or signs up for a course, it's in Salesforce right away. And we use Celery for that, is that right? At last count, when I looked, we have about 1.5 million records in Salesforce across all of our content types. So that's a lot of stuff. Also, because our fundraising team mostly works in Salesforce, if they need to update somebody's contact information there, we want to make sure that gets back to Plone. So four times a day, we send just contact info, like name, address, email, phone, that kind of stuff, from Salesforce back to Plone. The nice thing is, once we've got data in Salesforce, we can set up a workflow to trigger an email to that person. So somebody signs up for a course or activity, we can send them an email. And the nice thing about doing this in Salesforce is we can go in there and customize: we write that email, we can change the text, we can fix the spelling, we don't have to ask Dave to do it for us, and we don't have to build some special UI where we can edit the emails we're sending.
So that's a great, wonderful piece of using Salesforce. A little bit about how we use Salesforce beyond those transactional emails. Percolator helped us build what we call an engagement pyramid, which helped us understand who our audience is and how they progress through our organization. People that just found out about us are there on the bottom, and our visionaries, our CEO or board president, people that have really come up through the organization or have great vision for what we do, are all the way at the top, with everybody in between. That helps us figure out how to communicate with those different groups of people; you're going to talk differently to a board president than you do to somebody that just walked in the door. Part of that is email newsletters and a welcome email series. We have a handful of emails that go out every month: one called RouteFinder, just general information about the organization, what we're up to, what we're doing. We have Conservation Currents, which is specific to our conservation and stewardship activities. We have Activities and Events, to let people know what we're up to in terms of, hey, it's hiking season, or we've got the film festival coming up. We have Leader Lines, targeted at our volunteers to help them learn how to be a better volunteer and what opportunities are available to them. And every branch has a monthly e-news that goes to just their branch members. We also have a few series. We have Welcome New Guests, which gets three or four emails over the course of up to six months. We have Welcome New Leader, which is kind of a one-time, follow-up-a-couple-of-months-later email. And we have an opt-in on the Plone side that gets synced to Salesforce. And in Salesforce, we use their Marketing Cloud, which used to be ExactTarget but is now more and more integrated with Salesforce itself, and we can basically sync that opt-in.
So people opt out: they go to the Plone site, go to their preferences and say, I don't want to get that anymore, and that'll sync over and they'll stop getting the email; if they want to opt in, they can. So that's how that whole chain goes. The other thing Salesforce is great at is tracking and reporting. It's really easy to write a report in Salesforce. The only downside is you kind of have to know the underlying data schema and have some grasp of good data analysis, to know what question to ask and how to pull the data. So I can't just have my staff write reports completely on their own, but I can write a report now and say: here's your report; if you tweak this filter, if you want a branch, you just change the branch there; if you want a different date range, you change it there, and they're off and running. And there are some routine reports that are always going to be the same kind of date range, and they'll change based on, look at last month, look at this month, that kind of stuff. So it's a great reporting tool for us to use. It made my life a lot easier: I write a report once, I'm done, they're using it. So I can write the same number of reports but be a lot more effective, because everybody's getting the reports they need and they're off doing stuff on their own. And when they come to me, it's because they need something special or a change. We can also put together dashboards, and we've got a handful started: one for membership, which we have up here. The most important statistic there is on the left; we have nearly 12,000 members now. That's up from about 10,000 when we launched. This is what we measure, what we care about, so this is a really good thing for us. We've got one for activities and courses. We've got one for our volunteers. We're working on one for donations, one for conservation.
And we've also got one for every branch; each branch gets a dashboard with the statistics that are important to them. So it's a really good tool, and we can put these out every month so they can see them. Of course, Salesforce is a license-based thing and we only have licenses for our staff, so any of these that need to go out to our volunteers, we just make a PDF and send it to them once a month. And a big thank you to my friends at Plone: Sally, David, Carlos, Chris, Jim, Kim, Jesse. Where did they all come from? Holy cow. And where are my editors? At Percolator Strategy, Chris and Karen have been a huge help to us. Their Salesforce crew is Matthew, Nicholas, John, Barbara, and Drew, with Kevin for HTML and CSS. Darrell Hool is a wonderful UX designer, and Neil is our graphic designer. And of course our Mountaineers staff, board of directors, and volunteers; everybody helped make the site what it is, supported it, paid for it, all that wonderful stuff. And thanks to all of you who helped make Plone such a great platform for us to be on. And of course, the launch is just the beginning, with continued improvement, always adding new features, always fixing stuff. So that's who we are. And see ploneconf.sixfeetup.com for how wonderful this conference is. Any questions? Let's start here and work our way over. Can you license this? Can I license this? It's funny you ask: the Mazamas were beginning to embark on a web project and we were talking about that very question. We were starting to talk about what we could do in that realm, and we just haven't really gotten anywhere. I work for a nonprofit organization where we take kids mountain biking. It's an international organization. We have a chapter in San Francisco. Yes, talk to Sally. The gentleman sitting next to you is from WTA, and they have a similar Plone-Salesforce setup; I've been to their site and I know it looks a little bit like ours, in terms of, we do similar kinds of things and have similar kinds of interests.
And David has worked on theirs. I'm sure there's some synergy there that we're unaware of, or maybe you're aware of. But yeah, it's a great job building what we need. And certainly Plone 5, with all of the Dexterity content types, I think will make some of what we did that was more difficult and challenging even easier. So there's definitely an opportunity there. Yeah, I can also speak to that just briefly, just to say, I mean, there's definitely work that would need to happen to create a reusable thing out of this. But we've thought about that a little bit, because we have started some conversations, and there is some potential for that to happen. Okay, who's next? What kind of checkout or shopping sort of thing have you done? We've built our own, but we have done some work since on turning it into a reusable package for another client, which is, well, I guess it's in our GitHub as jazkarta.shop. But it is a little bit opinionated in terms of what it uses for dealing with tax and with shipping, and, I mean, it's just using Stripe as the payment processor. Although there is some attempt to anticipate people adding different adapters for other services, in terms of its scope it does a lot of what the old PloneGetPaid used to do, in a lot saner way. I see Jens here, and I know BlueDynamics has a shop thing too, which I don't know much about; I would be interested to know how it compares. Okay. First I'd like to commend you on the beautiful website that you actually brought to the world of Plone. Thanks. Well, we definitely decided not to stick with our current AMS system, just because their upgrade didn't really bring us all that much. It was still the old data-driven thing; they added some stuff that made it look more like a website. Why we chose Plone: I was really not involved in that decision; I was handed the project to make it work. Sally, were you around for that? Oh, I was hoping.
Part of it is, like I said, we started with a company called Groundwire who did our initial discovery, and then Sally and the company did some rediscovery. Yeah, the Mountaineers did an outstanding job of really anticipating their needs for the future. Is that better? Yeah. Sorry. The Mountaineers did a great job of anticipating their needs and not just thinking, okay, we're going to spend X dollars this year and then we're going to ignore technology for the next five, like so often happens. They really laid out a plan, and they spent, I think, two years fundraising for this very major initiative, as you can imagine from the website, and also doing a discovery process with Groundwire, which was a nonprofit technology consulting firm in the Pacific Northwest here in the States that focused on environmental nonprofits. So this was exactly their specialty. The folks who used to work for Groundwire have a lot of expertise on the engagement strategy side, like you heard, and a lot of expertise on the CMS side. And Plone was something that they were particularly focused on, but they looked at a lot of other things, and it seemed like a really good match. Then Groundwire went out of business, and they had done this huge discovery process, and all of a sudden their selected vendor was gone. So that was a really interesting transition point, but because everyone had been so invested in the whole project, the people at Groundwire felt very invested in seeing the project go forward. A number of the Groundwire people, David included, and some of the people you saw in the pictures, actually went out of their way to make the project go forward, and Percolator was born out of that. And at that point they actually did another review of the CMSs on the market to make sure that Plone was still, because there was like a six-month or eight-month gap there, they wanted to make sure it was still the best choice.
So they reached out to at least Drupal, I know, and I'm not sure all the CMSs they evaluated at that point, but they really wanted to make sure that Plone was the best choice, and it seemed like the trade-off was between what you got out of the box and the customization you had to add afterwards, and it seemed like Plone had the best bang for the buck for what they needed to do. The other thing that's interesting is that, I mean, we thought long and hard about whether it even made sense to build this as a CMS project. I think in retrospect maybe you knew that more clearly than we did for a while. I mean, it felt like we were going to build all this registration stuff, you know, the rosters, that's all kind of custom stuff, the shop that we had to build on top of Plone, and maybe it would have been a little bit more efficient to build that on a platform that wasn't a CMS. But I think in the end it really made a lot of sense. I mean, the course and the activity pages, the things that you're registering for, do have a lot of content, and there are a lot of things that are very tied together between the registration stuff and the content stuff. And also the membership model: the fact that they've got 10,000-plus members, and they really needed a lot of those people to be able to have profiles, and a large number of people to be able to actually author the courses, all that kind of stuff really made sense. I think we went in with our eyes wide open: we were such a complicated organization, with so much breadth and depth, that no matter what platform we chose, no platform would serve all of our needs, and we were going to have to heavily customize whatever we got. So it was kind of picking what was the easiest to customize that had most of what we needed. You know, we could handle that kind of customization, and I know Drupal could probably do it, but it sounds like it's harder to customize for some of the stuff we needed.
And, you know, as a Plone user I can say I think it's a wonderful platform. I haven't worked with any other CMSs, other than WordPress, which I know wouldn't come close to what we need. But yeah. Are there any more questions? So thank you, Jeff. Yeah. Thank you.
The Mountaineers is a 110-year old nonprofit focused on exploring and protecting the outdoors. They have very broad and diverse groups of users - the general public, guests, members (adults and youth), volunteers, donors, and staff - all with competing technological needs and challenges. Jeff will show how a combination of the Plone CMS and Salesforce CRM help The Mountaineers meet these challenges.
10.5446/54089 (DOI)
Hello everybody. My name is Eric Bréhault and I'm going to talk to you about the JavaScript Plone client, which Eric introduced in his keynote this morning. Yeah, I know, the keyword here, which is kind of boring, is JavaScript, right? So I made a small poll on Twitter. I asked: recoding the CMS in JS, does that sound smart, stupid, or serious? Well, the first reaction was: yeah, seriously? How are we doing that? Well, I know a lot of people are complaining that modern JS is really too complicated, right? It's changing all the time, every time you have something new to learn, and the process to build even a very simple thing you could do with two lines of jQuery is crazy, and the tooling is crazy as well. Well, that's actually true. I cannot deny that, okay? But I read a blog post like a week ago or something, talking about the JavaScript fatigue, right? Because everybody's complaining about JavaScript fatigue; every two weeks or so you have a new package manager, you have a new framework, and so on. So we do experience this JavaScript fatigue, but there is also fatigue about the JavaScript fatigue, right? And the guy was saying basically that it's just like when you cook: well, it takes time, you're going to prepare something, but you prefer that to eating raw meat, right? It was kind of fun, and that's actually quite true. So let me tell you about a French recipe. Do you know blanquette de veau? How many of you know about blanquette de veau? A few of you? Yeah, okay. Well, that's something really famous in France and I do cook it from time to time for my family. You need some medium-sized cubes of veal, right? You put lemon on it. You need lemon because you're going to boil it for a long time, and if you don't put lemon, the meat is going to turn red or brown; with lemon it's going to stay white, okay? And if it's not white, that's not blanquette, right? So that's really important.
And you will also need a few carrots. You will need bay leaf and rosemary. You will need one onion, but you don't cut it: you take the entire onion and you pin a clove in it. That's a really important detail. And you're going to boil it really slowly for an hour and a half, okay? But 20 minutes before it is done, you need to take a small amount of the juice, blend it with flour, and cook it on a strong fire to make something like a béchamel, right? You probably know béchamel; this is going to be the actual blanquette sauce, right? So when it's all done, you can serve it together and it's really delicious, okay? So that's how you actually make blanquette de veau, right? Or you can kill a veal and eat the raw meat, okay? Wow. Two approaches. Both are valid, but I do like blanquette de veau, okay? So that's the thing about JavaScript. We do need the complexity because we are doing complex stuff, and at the end we get something that's worth it, right? So that's why we should accept that JavaScript is complex, right? But now, regarding Plone itself: developing a CMS in JavaScript, why would we want that, okay? Really? Well, reason number one: right now, the entire web is shaped by JavaScript, right? There is no way you can avoid JavaScript. You need it everywhere, you see it everywhere. And it's not just about the technique, it's not about the development itself, it's about the whole thing. Even when you're going to recruit someone, when you're going to get orders from your customers, JavaScript is there. You might find some excellent JavaScript skills out there where it might be difficult to find good Python skills. So that's something you cannot avoid, and that's part of our ecosystem right now. But CMS features are still a must-have. You need that. We cannot ask the users to use, well, Postman to make JSON requests on the back-end server. Of course not. We do need the CMS features.
They are useful even if everything is changing: the browsers are changing, we're going to have HTTP/2 and so on. It makes no difference. The users are the same kind of people, and they need a CMS. I don't know if you've heard about Contentful. Contentful is a product, not open source at all, and they're offering a headless CMS, right? It's an API, and you can use it with whichever framework you want. And something bugs me: their headline is like, "a CMS without the bad bits," right? That's how they advertise themselves. I have the strange and bad feeling that the bad bits here, that's us, right? That's Plone, or that might be WordPress or Drupal. That's what they're talking about here. And they're advertising their stuff that way, saying: well, you know about the old stuff, the ugly stuff, the dirty stuff, Drupal, WordPress, Plone, whatever. We are different. We are offering something better, something you're going to like. Well, that's not necessarily true, okay? But the message is not wrong. There are audiences out there which actually want this kind of new tool. And, well, for instance, if you consider WordPress: WordPress is the world-leading CMS ever, right? But do you think we can ever beat WordPress on its own market? We cannot, right? We don't even try, okay? But that's not important. What we need to think about is the next market, not the today market. And, well, I'm not sure what the next market is going to be for the CMS, but what I know is that when it becomes the actual thing to do, there will be no difference anymore between WordPress or Plone or Drupal. Whoever provides the right API is going to be the one who gets this market, right? Because it won't be about how the back end is done anymore, but about how the API behaves and what kind of ready-to-use JavaScript components you're going to offer to people. That's what will make the actual difference.
Right now, someone who has to choose a CMS might consider: PHP is simple, it's something I know, maybe I'm going to prefer that. But tomorrow, that won't be relevant anymore. It will only be about how you can manage your JavaScript development using this CMS. So that's actually really relevant, right? So, what I think is: we do know about CMSs, right? We don't just know how Plone works or how to develop add-ons in Python. We know about CMS features. A lot. There's a lot of knowledge in the whole community about that. So why not build a JS one, right? Why not? That was reason number one. Reason number two is something you might have experienced with Plone 5: building front end into back end is painful, right? Ask Andreas Jung, he would tell you something about that, right? You have been playing maybe with bundles, with mockup and so on. And the whole idea of having all this stuff, which offers good and modern practices to build JavaScript, but putting it into the back end somehow, having it connected to the registry and so on, that's painful, right? Why is it? Because that's not the natural way to go. Let's do front end the front-end way. That makes sense, right? So let's forget about having the front end shared between back end and front end, and let's build everything the front-end way, right? That's actually good architecture. So what are we talking about? Well, this: npm install plone-client. Maybe I should have said yarn add plone-client, maybe. But the thing is, don't type it, it won't work. It's not on the npm registry yet. It could be, but it's not. The Plone client is going to be the top part of this slide. This is the slide from Ramon's talk, and it's going to be the part on top of the API which provides everything about viewing the website, using the website, managing the content. Everything the user is going to see is going to be the Plone client, right?
And everything under that, the business model, the data model, the content itself and so on, is going to be managed by the server, right, through the API. So that's about handling the theming, of course. Theming, CSS and JS sound quite okay to be managed in JavaScript, in front-end technology, right? But that's also forms and form rendering. That's also templating, viewlets, portlets and all the fancy features we can offer to the content manager, right, the person going to manage the content, so that could be covered by Mosaic, for instance, and everything you might imagine regarding configuring the website. So that's a lot, right? That's not that easy. What we have for now is something which works on top of the API, and the good thing about it is that it does work right now on plone.server, which was introduced by Ramon, but also on Plone 5 with plone.restapi, okay? The same code, exactly the same one, is going to behave the same way. And you might find the repository; you can play with it, it's actually working. It's in the Plone GitHub organization, so it's not in the Pyrenees GitHub as Ramon mentioned regarding the server, because it does actually work with Plone 5, right? So you can use it the way it is, and it can actually be a good way to play with it and see how you could use plone.server as well, instead of Plone, why not? It actually makes no difference between the two of them, because of the API; that's the whole point of having a REST API, of course. So now, how is it done? It's built with Angular 2, okay? Angular 2 is quite new; it was released this year, I think it was in September, yeah, September. The release process has been quite painful, but now it's a really nice piece of software and something you can have a good feeling about, I think. It's based on components, right? So everything we might have referred to as viewlets or portlets are actually components, and they are pure client-side components.
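That portability, the same client code talking to plone.server or to Plone 5 with plone.restapi, comes down to speaking JSON over the same URLs the site already publishes. Here is a minimal TypeScript sketch of what such a request looks like; this is not the actual plone.client code, and the helper name is mine. The one real ingredient is the Accept header, which is how plone.restapi knows to return JSON instead of HTML.

```typescript
// Sketch: describe a JSON request to a Plone back end through its REST API.

interface ApiRequest {
  url: string;
  headers: Record<string, string>;
}

// Build the request descriptor for the content object at `path`.
function buildApiRequest(base: string, path: string): ApiRequest {
  // Join base and path without doubling slashes.
  const url = base.replace(/\/+$/, "") + "/" + path.replace(/^\/+/, "");
  return {
    url,
    // This header triggers the JSON representation on the back end.
    headers: { Accept: "application/json" },
  };
}

// Usage (against either back end):
//   const req = buildApiRequest("http://localhost:8080/Plone", "front-page");
//   fetch(req.url, { headers: req.headers }).then(r => r.json());
```

Because the client only depends on this contract, swapping Plone 5 for plone.server underneath it changes nothing on the front end.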
So you can see the kind of code you're going to have. This could be your main template: basically you can have a Plone header, a Plone navigation, a Plone toolbar, Plone content — everything is going to be a component, an actual component you would use like any HTML element, right? And this comes with logic and some calls to the backend API, right? So that's quite clean, right? It works with registries — registries are there; we have implemented a system which reproduces the same mechanism. So you can register a view, you can register a component, you can override an existing component with yours — this kind of thing works. It's not as rich and flexible as the ZCA for now, but that's basically the idea. This was not provided by Angular 2, okay? Angular 2 is a framework, and the people who are building it are focusing on, well, regular projects, of course — and a CMS is not a regular project. Making a CMS is a big thing, because when you have a CMS, you need to be able to build any website with it, while when you create an application, a JavaScript application, you are just targeting one use case, right? And so we need to provide something which is pluggable, customizable. That means a lot in terms of architecture. So we are building some key things that we know work for a CMS, because we have been doing that for a long time, for years, and we are just taking the same principles and turning them into Angular 2 mechanisms, right? So we do have the registries. We do have something different from JavaScript as well. I know that Python developers are really reluctant to use JavaScript because it's kind of ugly, it's something you don't feel good about. TypeScript is actually a really, really interesting language. It's nothing like CoffeeScript, for instance. You might know what CoffeeScript was: it was basically changing the syntax of JavaScript. TypeScript is a superset. It's a superset of JavaScript ES6, all right?
And what's really good about it? Okay, it's going to imply compilation, right? Transpilation, exactly. But you need that anyway when you're trying to do anything using the modern approach to JavaScript. You're going to need it for Sass or Less, for instance; it's exactly the same for JavaScript. So it's not a big deal. And what you get in exchange is something really interesting to use. You have types, of course — that's TypeScript, so everything is typed. You can import stuff. So you can import a module; you can import something from an external dependency or something from your own code. You just do it exactly the same way as you do in Python. And you can forget about these crazy things like require.js, which is actually useful and does work, but which is really ugly to write. And you can also forget about what we had, for instance, with Angular 1, where all the dependency injection was really verbose. Now you just have a main module in Angular 2 where all your injections are performed, and then everything is provided to the rest of your application. So all your code base is really clean. You just import such class from such file, such module, and it works exactly how you expect. Very much like Python does. Of course, you have classes also. You have inheritance. You have decorators, which are really powerful. It makes everything very readable and maintainable. Really, it's impressive. So what do we have for now? What do we get for now in the Plone client? Well, first thing, we have a basic Plone UI. You have seen the demo — kind of a demo — from Eric this morning. It was kind of crappy, but we were working on it yesterday, and he left at 6 p.m. and it was really too late. And we had everything fixed a few hours later. And yeah, he had to work on his slides, blah, blah, blah. Anyway, so now what do we have? We have all the elements of the Plone interface componentized, right?
That means navigation, login, breadcrumbs, the footer, search, toolbar, et cetera. Everything has been componentized. That means those are elements you can use: put them into a template, and it's going to work the way it is, right? We have a theme; it's a Sass version of Barceloneta. So that's quite straightforward: we took Barceloneta and converted it into Sass, and that's how it works. The important thing about the theme: it can be overridden at build time. That means the theme is part of the Angular 2 application. So in a component — a typical Angular 2 component — in the decorator of the component class, you're going to indicate which is your style, which is going to be a Sass file, and which is your template, which is going to be an HTML file, right? But that's something that can be overridden when you build your actual project. Webpack is able to use preloaders, and preloaders are aware of any resource you are trying to get when you are building your bundle, and we have implemented a preloader for the Plone theme. And it actually works very much like the portal_skins custom thing: you take precedence over any resource which is named the same way and located at the same location. So if there is, for instance, login.html somewhere, and you put a login.html made differently in your custom folder, then automatically Webpack is going to pick yours and use it instead of the original one, right? So that's how it works. It's not very... well, it does work. The principle is there, but it's not something we have been testing a lot. For now, we are just working on Barceloneta, right? I haven't experimented with an actual usage of the Plone client with a completely different theme. I don't know if it works, actually. But the principle — the principle that you can override things — is there, okay? We have server-side rendering. Wow. Why do we need that? We are building a Plone client.
It's going to be running on the client side — I mean, any visitor of your website goes to a single-page app, basically, and it's going to get information from the backend and render it dynamically using your components. So that's probably enough. Well, the thing with Angular Universal is it's going to reproduce exactly the same behavior but on the server side and serve really static pages to your users. It can be useful for first-time rendering, for instance, to improve performance. It can also be useful for SEO somehow, for Googlebot, et cetera. And it might be useful in any case: for instance, if you want to offer a non-JavaScript-enabled website, that's also useful. So that's something which is there. We have a form generator; that's basically the equivalent of what we had with z3c.form. As Timo explained, it's based on JSON Schema. JSON Schema is a standard format for schemas, right? And we have extended it, because JSON Schema is just about the data, right? It makes no difference between, say, a textarea and a text input or a rich text field — those are all strings, right? It's about the data, how it is persisted, okay? So we have extended JSON Schema in order to be able to declare everything we need about the widgets, right? About how the form is going to be, not just the data. So it's quite advanced. It's working quite nicely. We had to develop it ourselves, unfortunately, even if it was kind of fun. There were some really interesting things in Angular 1 — angular-schema-form. I had been using this module for a long time, and it was really, really good. But the people there were not migrating to Angular 2 at the time we started the Plone client. So I decided to start our own thing. But I have been in touch with them, as they might be interested in collaborating. So maybe it's going to be the next version, I don't know.
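The JSON Schema extension described above isn't spelled out in the talk, so the `widget` key and the widget names below are assumptions, not the actual Plone form-generator format; this minimal sketch just shows the idea of layering rendering hints on top of a plain JSON Schema, which by itself only describes the data:

```python
import json

# A plain JSON Schema describes only the data: both "nickname" and
# "biography" are just strings, with nothing about how to render them.
schema = {
    "type": "object",
    "properties": {
        "nickname": {"type": "string", "title": "Nickname"},
        "biography": {
            "type": "string",
            "title": "Biography",
            # Hypothetical extension key: tells the form generator to
            # render this string with a rich-text widget instead of a
            # plain text input. The real key names may differ.
            "widget": "richtext",
        },
        "newsletter": {"type": "boolean", "title": "Subscribe"},
    },
    "required": ["nickname"],
}

def widget_for(prop):
    """Pick a widget: an explicit 'widget' key wins, otherwise fall
    back to a default based on the JSON Schema type."""
    defaults = {"string": "text", "boolean": "checkbox"}
    return prop.get("widget", defaults.get(prop["type"], "text"))

widgets = {name: widget_for(p) for name, p in schema["properties"].items()}
print(json.dumps(widgets, sort_keys=True))
# → {"biography": "richtext", "newsletter": "checkbox", "nickname": "text"}
```

The point is that a validator that only understands standard JSON Schema simply ignores the extra key, while the form generator uses it to choose between textarea, rich text, and plain input.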
But for now, what we have is something very rich, with all the different widgets you might need. You are able to register new widgets. You can also register new fields. So that's pretty much what you can do with Plone. We have TinyMCE. We have master-slave: you can have, for instance, a checkbox, and if the checkbox is checked, then a part of the form is automatically displayed or hidden. We have datagrid: a sub-schema can be repeated as many times as you want, and you're going to create a datagrid. So those are the kinds of features we have, which are pretty advanced, right? And it's all customizable, right? And this is a separate project from the Plone client. The Plone client uses it as a dependency. It can be used elsewhere, and Ramon is going to show you that. We have views. So we provide the basic views — add, edit, view, search as well; login and logout are views — and we can register views, right? And those views are actually processed by the client, and we traverse to the... we actually route to the right content in the API. The routing is not totally fine; I know I need to improve it, but it actually works. So you can take any page of your website and add /add, and you're going to have the add view, or /edit, and you're going to go to the edit view, just like you have on your actual Plone site. And this is something which is not common with Angular or any other framework, actually, because most people are focused on routing, right? That's how you do a single-page application. We don't. We cannot do that. We are running a CMS, right? We cannot tell people: all the pages are going to be under /pages, blah, blah. No, we cannot do that. Of course, we don't want that. So here we had to figure out a way to manage that properly. And it's totally doable, and it works nicely with Angular 2, okay? But it's something you're not ready to use out of the box, basically. It's not part of the regular toolkit.
So I'm going to show a demo. So that's Ramon's computer, because I had some issues with mine. Yes, here we go. I need to log out first, I see. So I have no idea if it's going to work, because I was supposed to run the demo on my machine and I just can't connect to the projector. But yeah, I'm quite confident. So that's what we have for now. So you can recognize Barceloneta, right? This is the home page; we can navigate. So you can see it's not reloading the page, it's just calling the backend to get the content. Yeah, I don't know what the actual content is. But anyway, that's what you get as a visitor. We can log in. Do you know the login? It's admin, admin? Okay. So now we're logged in and we get the toolbar, right? So I can add a new folder, for instance. I don't have folders. This is a folder, I guess. Yes. Okay. I'll create a new page. Hmm, a little click. No, well, it's not working. No matter, I'm going to go back to home and create something and just edit here. So here's where we get the form generator. So everything here — it's kind of messy, right, because it's not ordering the stuff properly — but it's actually getting the different fieldsets from the schema of the current content, which is a document, and rendering it with the different widgets. So we have TinyMCE here. Well, it's actually not the latest version; this is a broken version. Anyway, so we can go back to view. Yeah, no, we cannot. It's all broken. So I don't even know if I can change the workflow. Maybe I can. Yes, I can. So it works. We can search also. So here you have results. So this is basically what you can do with it for now, right? It's not much, but it does work. Well, some parts of it were not working in this demo, but that's something which is fixed. And yeah, it actually implements all the stuff I've been demoing, or trying to demo. You can create new contents, and new content inside new contents.
I mean, it does route; it does match the right rendering to the different views you have. It does everything you might need to actually manage content. It does not do anything about configuration, for instance. It does not allow you to manage what could be types, for instance, with a Mosaic equivalent. This is not done yet, but that's something we're going to do. So that's it for the demo. I don't think I need to show anything else. No, that's it. So, back to my slides. Okay. Yeah, now another demo about something different, but based on kind of the same thing: the Plone manager is something Ramon is going to demo. Well, the main idea is that we've been trying to do the Plone client, and it's definitely a lot of work. And I think that — you see, doing what Eric has shown you, it's been like a lot of months dealing with this kind of reimplementation in JavaScript of things we already have in Plone. I think that we are going to focus on having more targeted applications that are built on top of the Plone client and the REST API. Otherwise, if we try to recreate everything that we have in Plone in JavaScript, we may end up crazy, or something worse. So this is an example of this idea. For instance, last week, when I was thinking I should go to the Plone conference, I thought I would like to show something visible about plone.server, because, as Eric said, how do you make people say "oh, wow"? If you show code or a terminal, it's really ugly. I was thinking, why not use what Eric has been working on — this schema form library — and Angular 2 Material, this cool new stuff, and Electron, to create a really quick prototype? So it's a prototype. It's really experimental, just to show a proof-of-concept GUI, to work on Plone in a desktop application. So I had an afternoon, and yesterday Eric helped me understand the Angular 2 schema form, and it's really nice, I can show you.
So I need to switch from the Plone backend to the plone.server backend, and I need to stop the Plone client, and I need to start the Electron application of the Plone manager. So this is the — sorry, there is a lot of stuff here. I don't know if you can — yeah, I can make it full screen. So this is a simple Electron application, so you can deliver it as an executable on Windows, Mac, Linux, whatever you want. It's built with Angular 2 and Material 2, and it's using the Plone client libraries for dealing with the schema. We are trying to create an API library using both ideas, so we have a JavaScript library to consume the REST API. So how it works: we define here — I don't know if you can see that, bigger — the kind of server you have, CMFPlone or plone.server; you define the location of the server, the ZODB, the site, the port; you can create new sites; you can define the authentication, if it's basic — it also supports getting the token from an OAuth server. And when you have all these things configured, you can go to the browser, and — okay, I'm not a designer, so sorry — this is the root of the Plone site, where I get the sharing permissions, the local and the generic permissions with the roles of the users, and I get a list of content. So I have different content here, and I can go to this one, and it shows me the metadata information, and I can go back to the parent, and I can go again to this document with all the sharing information, all the fields. And since yesterday, we are able to create a new document in this place. This is using the schema form — blah, blah, PloneConf, text, whatever, save — and now I just got a new document inside my folder, it's called PloneConf, and I can go there and just see it.
This application also supports exposing all the registry information that we have — this is the case of plone.server, which has a really small registry; there is only this option right now — and all the types that we have registered in Plone, so you can browse there. So it's quite a simple use case of consuming the API: building a desktop application to do whatever. The imagination is the limit here, so we can create applications for a blog site, create applications for dealing with documents, or stock management, or whatever — you can focus on a specific solution and deliver binary applications or web applications that fulfill the needs. And I just wanted to show this idea. Okay, thank you, Ramon. So, that's just a basic reuse of some components involved in the client, and it has been set up in a very different way, with a different objective, so that's interesting about the flexibility of the whole thing. What we plan for the near future is to have, first, something like Mosaic. That's something Ramon has already worked on; it sounds totally possible to do quite easily, actually. We're going to reuse the same principle as Mosaic: we're going to save the layout as an HTML template, and we're going to render in this layout the different tiles we want, the fields or whatever it might be, okay? And this is something which is really quite easy to do with Angular 2. We also want to have a CLI, so basically the ability to create a new Plone project: you're going to run something like "plone create my-new-plone", and it's going to set up an Angular 2 project for you with all the dependencies you might need, and then you can start customizing stuff: creating a new theme, creating new components or views or whatever.
We also want to support traversal in a better way, because right now we are using the default routing from Angular 2, and it's basically really stupid: we just match all the routes — star, star, everything — to a single component which manually parses the current path, detects what the URL of the backend equivalent of the current object is going to be, extracts what the view name is going to be, and then renders it properly. That's not how we want to do it, because Angular 2 allows you to create your own resolver for the routing, so we plan to have an actual traversing mechanism for Angular 2, and we're going to implement that. We also plan to provide a Plone front-end API. For now, the Plone client is just one big project with all the components and all the interface things in the same place, and you cannot really depend on it and build a site from scratch. We want to extract everything which is about dealing with the backend, rendering the different components, having all the utilities you need, into an API, so you can then create your own project using this API, and your project is going to be really clean, really simple, just focusing on what you want to do exactly. That's something we need to do, and we also want to provide very simple, reusable examples. We are thinking, for instance, of creating a blog website, where you can post blogs and display them, having categories, that kind of stuff — something really simple, but that would be interesting because it can be reused. So someone who is totally not a Plone developer can take it, create a blog, add all these Angular 2 things he knows about, and have something working. And it's also going to be really interesting for understanding how it works, how you can reuse it, how you can understand the Plone client and get involved in the whole thing, because for now it's kind of, as Ramon said, probably too big.
We are trying to do everything Plone itself is actually doing. That's too much, probably too soon, so we want to provide something much cleaner, simpler, right? That's what we plan for the future. So that's it. Any questions? Thank you very much. First, kind of a simple one: you were showing URLs and they didn't have a hash in them. So are you saying that plone.server's backend supports HTML5 pushState? Yeah, it's not using the old way of routing using the hash. It could be switched on or off — that's really easy — but basically nobody wants that. Okay, right. Second thing: you talk about having a CLI, but for people who just want to consume it and make, like what Ramon was saying, a specific app, they want to use the Angular CLI. Are you going to support that? Yeah, we're actually using the Angular CLI, right? So what we want to do is something just like the Ionic people are doing right now. You know Ionic, maybe? Okay. They have a CLI, so you can really easily create a new Ionic application, and then inside that you can use the Angular CLI. So we are not replacing the Angular CLI at all. We are just making the equivalent of mr.bob, for instance: it initializes a buildout and Python eggs for you with everything you need to have a Plone project, and it's going to be exactly the same with the Plone client and Angular, right? That's what we need to do. You mentioned TypeScript. I wasn't clear — is that something that you introduced, or is it part of Angular 2? Where does it come from? Well, it comes from Microsoft. No, I mean in this context. Angular 2 is based on TypeScript, right? You can use Angular 2 without TypeScript, but that would be really stupid. All the good things about it: you do use TypeScript to do Angular 2, and that makes it really wonderful.
For instance, just declaring a component is just as beautiful and elegant as declaring a Plone adapter: just a few decorators and there you go. It's more a Grok-like approach to the thing, but yes, it's really, really clean. And it makes all the difference to me. Really. I mean, I really love Python, and I enjoy programming JavaScript, but it's always kind of painful to see how ugly it is. This is really different, right? And the good thing about it is that it is a superset. So you can still write plain JavaScript; it's going to work. So an actual front-end developer would be okay with that, but a Python developer would be okay with it too. So that's really good. I think it's a very good thing. Okay. If you have two clients connected to your backend and one client, say, adds a page, will the other client see the new page without doing a reload? Right now, no. But plone.server will provide a WebSocket, so that's exactly how it's going to work. Absolutely. So right now WebSockets are there, but the Plone client is not using them yet to get information back from the server when it's displaying something. That's quite easy to implement, because the whole idea of plone.server is to have this — because we want that, of course. We do want that. Okay. Well, thank you very much.
A key part of the work started at the Barcelona sprint this summer was an extensible Angular 2 based app, which does all its interaction with the Plone backend through the new REST API. It supports both server side and browser side rendering for fast response time, SEO and accessibility.
10.5446/54093 (DOI)
I talked about time zones, but I had technical problems, so I was not able to start with this song, which was the whole point. So it's extra content. We'll see. All right, looks like everybody's here. Right, I have to switch my glasses too, so I can see something. Okay, so my name is Lennart. I haven't been to a Plone Conference since Arnhem. I could spend some time talking about what I've done since, but that would use up all the 40 minutes, so I'm not going to do that. Instead, I'm going to talk about what I've done the last half year, during which I've worked for a company called Shoobx, which is here in Boston, although I live in Poland. And Shoobx is the one place where entrepreneurs, lawyers, investors, and board members can come together to generate, review, approve, and execute company documents. It's a system of record, execution, and engagement built to simplify standard corporate legal processes. And if you want to know what that actually means, you can talk to Stefan over there. But we handle legal documents and legal things for companies — hence the name Shoobx, after the shoebox, which is traditionally where you keep all your documents related to a company. And at Shoobx, I've been mostly dealing with reporting. Because you want reports — you might want many reports: which office is selling the most yellow telephones, which workflow step are people getting stuck at, are we actually getting new customers at a rate that is higher than the rate we are losing the old ones. People all around the company have many questions. And you have a big database — either a big relational database like this one here. It's extremely normalized, because it was designed by people who had taken university courses on relational databases. That means that no data is duplicated anywhere, and you have several tables that contain nothing but a key column and a value column, and your queries become very complicated and have many joins.
If you do one of the joins wrong, the query will take 10 hours instead of 10 seconds. Or you use a NoSQL key-value database, which may have some indexes, but not on the things you need to index on, and there are no reporting tools for it — especially because you selected the brand of obscure NoSQL database that was the database of the week when you started the project. Or because you're using such a fantastic and stable and old, amazing database that everybody else has forgotten it even exists. Or the typical case is, of course, that you have many databases with information relating to your company. So if you're going to make a report, you have to make these elaborate queries with loads of join statements for the SQL database, and you have to write a little program that exports data to a comma-separated file from the NoSQL database. You have to merge them with a little script, and out comes a nice table and a nice graph after using some office program. And all is well, and your boss is happy, and your job is secure, and she actually wants a new report the next week — but then somebody has slightly changed the schema of the database, so your queries don't work anymore. But your boss wants the weekly report. In fact, she thinks they're so useful she wants it every day. But with all this querying and merging and manual tweaking you have to do, it actually takes you half a day to make the report, which means you don't really have any time left over to do your actual job. And the query takes so long to run and loads down the database so much that the application gets slow, and people start complaining that the internet is slow, because they don't actually know the difference. And they complain to the network guy, and he gets angry at you and stops bringing you coffee when he fetches coffee. So how can you get your coffee back? Well, obviously the title of the talk has already said what you need: a reporting database.
That's because databases are designed to be fast and useful for the software that uses them, not for making reports. That means you'll have an index on last name and first name of all your customers, but you won't have an index on which region of Poland the customer belongs to, because that's not what customer service searches on. But you need to search on that, because you need to make one report per region. The reporting database is made to make reporting easy, and it's not afraid of duplicating data. It can integrate data from several sources, and it's often — even typically — read-only; that is, you load in the data in batch form and then you don't modify it after that. When should you use it? Well, if your transaction performance is critical, so you can't touch the production database in any way because you can't make it slower; or if you have multiple databases; or if the reports take a really long time to run; or if you need to massage the data before doing a report. Massaging here means that maybe the internal field names and values don't actually match what the economics department wants. We internally have a lot of companies in our database; we call them entities, because they're legal entities. That's maybe not what you actually want to print out. We want "entities" on our reports because we know what they are, but maybe on some report it should actually say "companies" — things like that. Or the regions of your state or country might not actually fit the regions of your company, so that you have to map each customer, depending on address, to a sales office. That's a typical data massage. And how do you do it? Well, it should be a separate database; it should preferably even be on a separate server, so that it doesn't load down the production database. Updates need to be asynchronous, meaning that when you update the data in the production database, the reporting database should not be updated at the same time, because that will slow down your application.
It needs to be either an asynchronous call, so it's done by some asynchronous worker later, or — which seems to be more common — done using batch exports. Data should be rebuildable, so you can drop the whole database and rebuild it. And it should use star or snowflake schemas. Those are schemas that look like this. They're simple schemas where relations exist only between the central fact table and the surrounding dimension tables. The fact table should hold the measurable, quantitative data. That means this is what you want to get numbers out of: you want to know how many of something, for which quarter. That's a fact table. And the dimensions are the descriptive attributes. So for example, a fact table could contain a list of all sales transactions, while dimension tables would hold the information about the things sold and the customers, et cetera. In practice, it can be hard to know what is a fact and what is a dimension — or at least which columns should go in the fact table and which should go in the dimension tables. But with a reporting database, always design it so that it makes it easy to make reports. The important part in star schemas is that you don't have relations between dimensions, because then queries can become complicated again, and you can get absurd results when you do an inner join and you should have done an outer join, or the other way around. You can add another level of dimensions, so that the dimensions also have dimensions, and then you have a snowflake schema. I don't know how common that is; we don't have any use for that. And one dimension is a bit special, called the date dimension, so it's worth talking about that. And that has to do with date calculations being both tricky and slow, and not searchable. You can, for example, not make a query on a date field saying that you want every Tuesday from the third quarter and nothing else.
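The star schema described above can be sketched with SQLite: one central fact table of sales transactions, dimension tables for the descriptive attributes, and only fact-to-dimension joins. All table names, columns, and data here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()

# One central fact table; the dimensions hang off it and never
# reference each other -- that is what makes it a star, not a web.
c.executescript("""
CREATE TABLE dim_product (id INTEGER PRIMARY KEY, name TEXT, color TEXT);
CREATE TABLE dim_office  (id INTEGER PRIMARY KEY, city TEXT);
CREATE TABLE fact_sales (
    product_id INTEGER REFERENCES dim_product(id),
    office_id  INTEGER REFERENCES dim_office(id),
    quantity   INTEGER,
    amount     REAL
);
""")
c.executemany("INSERT INTO dim_product VALUES (?,?,?)",
              [(1, "Telephone", "yellow"), (2, "Telephone", "black")])
c.executemany("INSERT INTO dim_office VALUES (?,?)",
              [(1, "Boston"), (2, "Warsaw")])
c.executemany("INSERT INTO fact_sales VALUES (?,?,?,?)",
              [(1, 1, 3, 30.0), (1, 2, 5, 50.0), (2, 1, 2, 20.0)])

# "Which office sells the most yellow telephones?" is now a single
# query with one join per dimension -- no long chain of joins to get wrong.
c.execute("""
SELECT o.city, SUM(f.quantity) AS sold
FROM fact_sales f
JOIN dim_product p ON p.id = f.product_id
JOIN dim_office  o ON o.id = f.office_id
WHERE p.color = 'yellow'
GROUP BY o.city
ORDER BY sold DESC
""")
rows = c.fetchall()
print(rows)  # → [('Warsaw', 5), ('Boston', 3)]
```

Because each dimension joins only to the fact table, there is no way to accidentally chain an inner join through a second dimension and silently drop rows.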
But if you have a date dimension — which is a table that keeps all this information, for every date, pre-calculated... I don't know how much you can see of this; these are actually the fields that we have in our date dimension. We're probably not going to use a lot of them, but it's good to have them there. So it's information like which month, which day, which day of year, which weekday it is — but also display information. We have a column called quarter ordinal, which in this case contains the string "3rd", which means that it's easy to make a table where it says 1st, 2nd, and 3rd for the quarter when you're going to do a nice chart. The ID, as you can see, is an integer, because it needs to be an integer for fast indexing. But it should be an integer that is easily recognizable for humans. So I'm using the ISO date format, but in integer form. So this specific date, the 6th of September 2016, is then 20,160,906 — which mathematically makes absolutely no sense, but you can easily recognize what date it is as a human. The basic procedure for updating the reporting database is called extract, transform, load. And that sounds obvious — you extract the data, you transform it, and then you load it into the reporting database — but it isn't actually obvious, because there's extract, load, transform as well. That's used for data lakes, where you stick all the company data, unstructured, into huge databases, which you then do data mining on. But that kind of thing is for when you don't know exactly what you're looking for. In this case, we know what we want to do reports for. We know what we want: we want reports. So then we should instead have what's called a data warehouse, and then you do extract, transform, load, because the database should be easy to use. So the data should be in a ready-transformed, easy-to-use state. These three steps don't need to be done at the same place or at the same time, or even on the same computer.
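A row of the date dimension described above can be pre-computed with a few lines of Python. The column names here mirror the ones mentioned in the talk (integer ID in ISO order, weekday, quarter ordinal) but are assumptions, not the actual schema:

```python
from datetime import date

ORDINALS = {1: "1st", 2: "2nd", 3: "3rd", 4: "4th"}

def date_dimension_row(d):
    """Pre-compute the attributes for one row of the date dimension.
    The primary key is an integer in ISO order (YYYYMMDD), so it
    indexes fast but is still readable by humans."""
    quarter = (d.month - 1) // 3 + 1
    return {
        "id": d.year * 10000 + d.month * 100 + d.day,
        "year": d.year,
        "month": d.month,
        "day": d.day,
        "day_of_year": d.timetuple().tm_yday,
        "weekday": d.strftime("%A"),
        "quarter": quarter,
        "quarter_ordinal": ORDINALS[quarter],  # display string, e.g. "3rd"
    }

row = date_dimension_row(date(2016, 9, 6))
print(row["id"], row["weekday"], row["quarter_ordinal"])
# → 20160906 Tuesday 3rd
```

With one such row per calendar date loaded into the reporting database, "every Tuesday of the third quarter" becomes a plain indexed query on `weekday` and `quarter` instead of a slow date calculation.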
And where you do them depends a lot on what requirements you have. Extracting can be done basically in two ways. Either you do it in some sort of batch fashion; very common, of course, is nightly batch extractions. But the tools that exist, often called data integration tools, can do this quite often: a batch extract maybe every 15 minutes, for example. If you have very sensitive data in your database, customer-sensitive data, the extract step should be careful about what is extracted, so that's kind of a half-transform step already. If you don't have sensitive data, you can actually just dump all the tables out in raw form and use that. But you don't want to have world-readable CSV files with sensitive data, such as whether any Catholics have bought French ticklers and are therefore not allowed to be in Monty Python movies. The big batch extract is the most common, as I understand it, and it's preferable if you can do a dump of all the data in a reasonable time frame. That means you need to be able to dump all your data out maybe once a night. If you can do that, you can drop the reporting database, massage the data, transform it, and load it, so you have a nice fresh reporting database every day. A lot of times, though, doing such an extract is going to take too long. Our extract currently takes around two hours, but we're hopefully just now at the end of a beta phase, so hopefully we're going to get more than ten times more customers, and then the extract would take 20 hours, which is going to be kind of hard to do. So we're not doing that. Otherwise we would end up in a situation where we'd have to run several extracts at the same time, and that gets to be a bit tricky. But again, using batches means you have ready-made tools, so that's a big benefit.
And if you can't dump everything out every night, you need to be able to detect changes; that's called change data capture. You have to figure out what it is you're going to extract, and if you don't extract everything, you risk that the data is inconsistent, that it's not actually the same as in the production database. On the other hand, you don't have to use up your main database so much. Change detection can be done in four ways. Timestamps: everything that gets changed gets a timestamp, and then you just do a query saying, I want to export everything after the last timestamp. Sequence IDs on things that get added. Database logs: many databases have the possibility to log each record, in each table, that got modified, so you can use those to extract the right data. And the fourth way is to do an update on change. You might remember that I said the update should be done asynchronously; it shouldn't actually happen when the main application data updates. Well, it can, but you have to do it asynchronously, so you have to have some sort of event system. Again, many databases do have event triggers, so if a record gets modified or added, you can actually execute some sort of code. Or your application might have events, but this typically means that you have to modify your own application. We are the ones writing our application, so we can do that, and if you're running Plone, then you can modify your own application and make your own event hooks. So that's fine, but not all applications might have this. The benefit is that you get near real-time updates, but you have to use some sort of asynchronous framework like Celery, which is what we're doing. So you get the information that something has changed, and you just tell the asynchronous framework that this has changed.
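The first change-detection strategy mentioned, timestamp-based change data capture, can be sketched as a simple query against the production database. This is an illustration using SQLite and made-up table and column names, not the talk's actual schema; the same idea applies to any SQL database with a "modified at" column.

```python
import sqlite3

def extract_changed_since(conn, last_sync):
    """Timestamp-based change data capture: pull only the rows that were
    modified after the previous sync (table/column names are illustrative)."""
    cur = conn.execute(
        "SELECT id, name, modified_at FROM entities WHERE modified_at > ?",
        (last_sync,),
    )
    return cur.fetchall()

# Tiny demo database with one old and one recently modified row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entities (id INTEGER, name TEXT, modified_at TEXT)")
conn.executemany("INSERT INTO entities VALUES (?, ?, ?)", [
    (1, "Acme", "2016-09-01T10:00:00"),
    (2, "Shoobx", "2016-09-06T12:00:00"),
])

# Only the row changed after the last sync gets extracted.
changed = extract_changed_since(conn, "2016-09-05T00:00:00")
```

ISO-formatted timestamp strings compare correctly lexicographically, which is why the plain `>` comparison works here; with a real database you would use a proper timestamp column instead.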
And then, in the background, things get updated and exported. The transform step means that you need to take all this data that you have extracted and make something that you can import into the reporting database. Worst case, you might actually need to load all the data you have from a table, or maybe two tables, and then somehow do magic on that to get the data out. And that can be so much data that you can't even keep it in memory, which means you have to create a temporary database and do queries from that. But that's pretty unusual. In most cases you can do the transform record by record, so you can write a simple script to transform the data, maybe read in comma-separated value files and print them out again, and then you have the data you want, and you can load it into the database. This is so trivial that you don't really have a clear separation between the transform and the load, because once you have the data, it's just a question of writing it to the SQL database. And ta-da! You have a reporting database and you can make reports. But for that, you need some sort of reporting tool. We have looked at a bunch of reporting tools, and these are my opinions of the ones we looked at. We have only looked at open source reporting tools, because we're a bit fanatic about open source, or at least I am, so we haven't looked at any commercial ones. And almost all of the open source ones that exist are bad in various ways. They are definitely not really designed to be used by normal end users, and quite often you need to write your own SQL. But for the first one, you don't: LibreOffice Base is the only open source tool that is aimed at normal mortals, and it has a query editor where you can make SQL queries without typing any SQL. It's pretty good and easy to use.
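The record-by-record transform-and-load described above can be sketched in a few lines. This is a hypothetical example, not the Shoobx pipeline: it reads an extracted CSV dump, and the only "transform" is turning an ISO date into the integer date-dimension key before loading the row into a fact table.

```python
import csv
import io
import sqlite3

# Extracted data as CSV, the kind of thing a batch extract might dump out.
extracted = (
    "entity_id,name,incorporated\n"
    "1,Acme,2016-08-01\n"
    "2,Shoobx,2016-08-15\n"
)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE entity_fact (entity_id INTEGER, name TEXT, date_id INTEGER)"
)

# Transform record by record and load: convert the ISO date string into the
# integer date-dimension key (2016-08-01 -> 20160801) and insert.
for row in csv.DictReader(io.StringIO(extracted)):
    y, m, d = row["incorporated"].split("-")
    date_id = int(y) * 10000 + int(m) * 100 + int(d)
    conn.execute(
        "INSERT INTO entity_fact VALUES (?, ?, ?)",
        (int(row["entity_id"]), row["name"], date_id),
    )

loaded = conn.execute(
    "SELECT date_id FROM entity_fact ORDER BY entity_id"
).fetchall()
```

Because each record is handled independently, this never needs more than one row in memory, which is why the transform and load steps blur together in practice.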
The reports, however, are made in LibreOffice Writer, and by default they're horribly ugly. You can of course theme them as you want, because it's LibreOffice after all, but LibreOffice is not that easy to make templates in. And I couldn't figure out how to insert a chart, which is something you want. Maybe you have to use LibreOffice Calc to make the charts. But often you want both a chart and a table in the same report, and I couldn't do that. A power user could probably use LibreOffice. Then we have BIRT, which is a plug-in to Eclipse. And if you think that it's strange to write a graphical reporting tool as a plug-in to a code editor, then we can only blame Emacs, because they started it. So yeah, it's a plug-in to Eclipse. It's kind of slow and kind of weird, partly because it's part of Eclipse, so it has a lot of menu items and things that make no sense and have nothing to do with reporting. But it is usable, and its default reports are prettier than LibreOffice's. Here, on the other hand, you have absolutely no help in writing SQL queries; you have to write them by hand. The last tool we looked at is Pentaho. Pentaho comes in two versions: a Community Edition, which is open source and free, and an Enterprise Edition, which is so expensive that they don't even tell you how expensive it is. And as you know from shoe shops and such, if you have to ask how much it costs, you can't afford it. So the Pentaho Enterprise Edition is only really interesting for enterprises who have basically infinite amounts of money. But the Community Edition is pretty good, and the Enterprise Edition doesn't seem to have that many more features. The Enterprise Edition has a JavaScript-based report generator where you can create your own reports, which is a little bit more usable than the Java-based desktop one.
But it's also a little bit strange in its own way, so it takes a while to get into that one as well. So it might not be worth it. But Pentaho has a lot of other cool features. It's not only a report generator, which is what you're seeing here; it also has data integration tools, so it's the whole package for doing all of the reporting database stuff that you need to do. And the Pentaho Report Designer is maybe about as quirky as BIRT, but quirky in a different way. The theming is probably incomprehensible unless you're already used to CSS. The theming isn't CSS, but it works in very much the same way as CSS, and that's probably because it can generate HTML reports as well, so they can transform your theming into CSS, which is what they're doing in the background. So you theme things by having a long list of style rules where you name what kind of field it is and in which place, and that kind of stuff. It also has a nice benefit in what is called the Business Analytics server, which as I understand it runs the data integration that we're not using; that's the bit that will run the batch extract, transform, and load stuff every 15 minutes or once a day or once a week. But it also has a feature where you can publish reports to this server, and then anybody with the password can go to that server in their browser and run the reports, which is very nice. It has a GUI query designer like LibreOffice, but it's not as good, so you end up having to type SQL anyway. I was going to make a slide here on libraries and frameworks in Python that can help you do all these things, but after looking, I could only find ReportLab, with which you can take your data and make nice PDFs. So if you want to do the whole batch thing, including the extract, transform, and load, and generating the reports, all batched during the night, ReportLab is a good thing.
So you can do it. You can even mail the reports to your boss when the report run has finished at 8.15 in the morning, and the boss will think that you're awake. If you know of any libraries and frameworks that can do reporting, tell me and I'll add them to the slide. So how do we do this at Shoobx? Well, we do it the hardest way we could think of, because we have, or had, very unusual requirements, so our solution is unusual. We wanted near real-time data, because we wanted this to be partly customer-facing: we had some data that were essentially reports that we needed to show to customers through our main application, and we wanted to offload that to a reporting database. But since this customer data is super sensitive, it meant we needed one database per customer, and that added some problems. And we also liked, that was not a hard requirement, but we definitely liked, that you could run the reports as needed. This means we ended up doing the update-on-change I mentioned: our updates are done asynchronously with Celery in the background instead of by batch. And we needed one database per customer, although that requirement has gone; we're not doing that anymore. We decided to use Pentaho. What helped us is that although we have high requirements, we also have an infrastructure that could easily handle them, because we only have one source of data. We have two different databases, but it's just one application, so we don't need to merge data from different databases. And that application is written by us, so we can change it as we like. And we already have events, because this is built on the Zope Toolkit; we're essentially using Zope 3, or large parts of it, so there are modification events for everything already. So that was easy. And we already had Celery set up to do other asynchronous tasks.
So all we needed to do was add a bunch of Celery tasks to export the right data, and then hook in events to call those Celery tasks. Now, that was harder than it sounds, but we could do it. So our extract step looks like this. We have the events, normal Zope events stuff, where we send information to Celery about what changed and when, what timestamp this happened at. And then we collect the data, because we have super sensitive data, so we can't just dump the objects directly out of the database; we have to choose what we extract. And then we make a REST call with the requests library. That REST call is picked up by a server we wrote with Flask, SQLAlchemy, and PostgreSQL, with a REST API that Stefan designed and implemented. It does some minor transforming of the data and sticks it into the Postgres star schemas. We use Pentaho, as I mentioned, and a report with a chart looks like this. There you can see how many incorporations have been done per month. You can see that there's one in 2013, and that's the test data that comes automatically when you load test data for a development environment. And then we've got one incorporation, the yellow one here in August 2016, which is when I made the chart. So it's not very interesting, but it looks good, and it's themed with the Shoobx colors and the Shoobx logo. And here's the column, or table, report. Here you can also see that, up there, you can select which company to run the report on. This is another feature of the Pentaho reports: you can publish them like this, and you can actually have input fields, so you can let whoever runs the report say which company and which time frame to use, and things like that. And here is one example of the star schema. Down in the corner we have the date dimension, and the facts here are facts about work items, which are part of the workflow.
So this fact table is what we're querying when we make the table we saw before, which tells you which workflow step people have gotten stuck on. So you can see who is stuck on which workflow step; we use that for reports. And here's another one. One thing you can see here is that we again used the date dimension, in the corner this time, and we actually reuse several of the other dimensions as well. In the blog posts and papers I found about star schemas and reporting databases, nobody mentioned this, and definitely nobody said that you can't do it. So we're actually reusing the dimensions, and our schema really looks like this. It's much more complicated, because this is five star schemas overlapped on each other. But because each time you make a report, you only care about one fact table and the dimension tables connected to that fact table, it becomes a star schema for each query, and every query is easy, even though this schema looks horribly complex. So I think you shouldn't really view it as one schema, even though we are reusing the dimension tables. We also have things like both an entity fact table and an entity dimension table. So when we do queries on entities, that is companies, like the first chart on how many have been onboarded and incorporated, we do a query on the entity fact table. Otherwise we use the entity dimension table, which is just there to display the company name, basically, in reports. And yeah, I forgot what I was going to say about that. It looks a bit messy, but it really isn't, because you should look at it like this, one fact table at a time. We haven't had any problems so far with reusing the dimension tables. So, a question: do the IDs from the dimension table and the fact table, for example for entities, stay in sync? Well, yes, we have internal IDs and those are in sync, but they come from the application database.
But you can also have just a key column, and that's not in sync. That doesn't matter, because you'll never query from the entity dimension in that case: if you have the entity fact, the information you have about the entity is already in there. But you could use the entity dimension table from the entity fact too; if you want to make it a little bit more normalized, you could do that. Yes? You're not expiring your dimensions, then? No, we're not, because you can't do that if you're sharing dimensions: if you have dimensions changing based on facts, then you can't share the dimensions. Well, we do have dimensions changing based on facts, but that information always comes from the production database again, which means it gets updated. For example, if the name of the entity has changed, then we change the entity dimension for that. So if you do a total drop and replace, you can do that; but if you wanted to do a historical analysis across a dimension change, you couldn't. Right, so if we make a report on 2013 and the company name has changed, we get the new company name. That would be the reason for slowly changing dimensions. Yeah, and the strategy for that is usually to just append to the dimension tables, which we don't have any need for. So this is an example of a query. This is the one you saw about the work steps, about where you're stuck in the workflow. The queries are very simple: you just select all the data you want from the work item fact, joined on the dimensions. And the benefit here is that for each of the five star schemas we have, this join bit, the FROM bit, is always going to be the same, so you can just cut and paste it. And then I filter out what I want.
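The shape of such a star-schema report query, a fact table joined to its dimensions, with only the selected columns and filters changing per report, can be sketched like this. The tables and data are made up for the example (SQLite in memory), not the actual Shoobx schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE date_dim (id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE workitem_fact (id INTEGER, step TEXT, date_id INTEGER);
INSERT INTO date_dim VALUES (20160801, 2016, 8), (20160906, 2016, 9);
INSERT INTO workitem_fact VALUES
    (1, 'review', 20160801),
    (2, 'review', 20160906),
    (3, 'sign',   20160906);
""")

# The FROM/JOIN part stays the same for every report on this fact table;
# only the SELECT list and the WHERE filter change from report to report.
rows = conn.execute("""
    SELECT d.year, d.month, COUNT(*) AS stuck
    FROM workitem_fact f
    JOIN date_dim d ON f.date_id = d.id
    WHERE f.step = 'review'
    GROUP BY d.year, d.month
    ORDER BY d.year, d.month
""").fetchall()
```

Since the joins only ever go from the one fact table out to its dimensions, there is no way to accidentally join dimensions to each other, which is what keeps every report query this simple.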
And here you see something that's not SQL at the bottom there, which is why the nice little syntax highlighter says this is wrong. That's because this is the field you can select, so it gets stuck in there. Some problems we had. You can see that there's a lot of information on this slide, and that's because I ran out of time, so I didn't have time to make lots of slides. One problem we've had is, for example: how do you know if data is missing? Well, you don't, because we don't do a complete drop and replace, so we don't really know if data is missing. We check how many entities we have in the application database and in the reporting database to see if any are missing, but we do not run queries on each of these entities to see that all the documents and everything are there, because doing such queries would take about as long as exporting the data. But we can resynchronize everything, of course; that was one of the requirements, you should always be able to rebuild the data from scratch. So we can do that. And synchronous updates slow down the app, so we have asynchronous updates, as I said, but they can come in the wrong order. This was something we also didn't think about from the start, because Celery can pick up the tasks in whatever order Celery feels like. This is why we added timestamps: the event starts the Celery task saying what changed and when it changed. That means that when our Flask server gets the information, if the information is actually older than what we already have, we just ignore it. The only really big problem we had, the problem that actually took down our production servers, is that the requests library is a little bit too good, but not quite as good as I thought. I knew that urllib doesn't have timeouts. But requests is so good and easy to use that I didn't think about that when it came to the requests library. Everything just works, right?
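The timestamp guard against out-of-order Celery tasks amounts to a last-writer-wins check on the change timestamp. This is a minimal in-memory sketch of the idea, with made-up names, not the actual Flask server code; the real version would do the same comparison against the stored row in PostgreSQL.

```python
# reporting-side state: record id -> (change timestamp, payload)
state = {}

def apply_update(record_id, timestamp, payload):
    """Apply an update only if it is newer than what we already stored.
    Tasks can arrive in any order, so stale updates must be ignored."""
    current = state.get(record_id)
    if current is not None and current[0] >= timestamp:
        return False  # stale: we already have newer data for this record
    state[record_id] = (timestamp, payload)
    return True

# The newer update arrives first...
apply_update("entity-1", "2016-09-06T12:00", {"name": "New name"})
# ...and the older one, delivered late by the queue, is dropped.
stale = apply_update("entity-1", "2016-09-06T11:00", {"name": "Old name"})
```

Comparing ISO-formatted timestamp strings works because they sort lexicographically in time order; with real timestamps you would compare datetime values instead.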
It turns out the timeout is actually not in any of these libraries; the timeout is in the socket library. So in fact, requests also does not have a timeout by default, which meant that our Celery workers slowly got stuck, just waiting for the reporting database, after we got connection problems with the reporting database and it stopped answering requests properly. That was an easy fix, but yeah, that's a little warning. We wanted a multi-threaded server for speed, but when we implemented that, we got massive write conflicts that I didn't get locally. That turns out to be because the default conflict strategy on PostgreSQL is one that is very nice, where you basically don't get conflict errors, because everything just waits for everything else. But for reasons I don't actually remember, the people who know databases better than me said we cannot use that in production, so we use another one, where you can get conflict errors, and we got massive amounts of them. We had to fix that by implementing retries. So we have now implemented retries in the database server: if you get a conflict error there, we just try again, up to three times. And we also have retries on the application server: on any kind of error except 400 errors, because a 400 error basically tells you that you made a wrong request, so there's no point in retrying it. But on all other errors, including connection errors and internal server errors, we just retry, because an internal server error is quite likely a write conflict. That means we now have up to nine retries, and the last sync I did of all the data didn't have one single error message in any log I could find. So I think we got rid of all the write conflicts. I hope. That's all; are there any questions?
The slides are available; you only need to remember slides.colliberty.com. Colliberty is my old company that I don't have anymore, but I still have the domain, so I stick everything up there. So, slides.colliberty.com, and you get a list of basically all the conferences I've ever presented at, and you have the slides there. Okay, no questions. There were some questions during the talk, which is okay, so nothing left. Good. Then I'm done. Thank you.
It's hard to make reports from NoSQL databases, and this is true for some SQL databases as well. And what if your reports need data from several databases? Then you need a separate reporting database. This talk will give you an introduction to reporting databases - what they are, when to use them, how to design them, and a quick look at some of the tools available. Lastly we'll look at how we implemented reporting databases at Shoobx and some of our experiences there.
10.5446/54107 (DOI)
Okay, so I did say that I wanted to change the name, but it didn't make it into the final program; but yeah, basically it's the same idea. So I'm Gil Forcada, I'm coming from Catalonia, actually working in Berlin, Germany; I'm on GitHub and Twitter, and I work at der Freitag, that's a weekly newspaper, and we of course use Plone. But just a quick quiz: who has ever made a pull request on GitHub? Okay. So do you know jenkins.plone.org? No? Okay. So, well, a few of you have maybe run the pull request jobs that we have. Fulvio, yeah? Of course. So there's this package that we have in the GitHub organization called Mr. Roboto. Ever heard about it? Okay, perfect. So, everyone says it, it's the ongoing topic: Plone is hard to contribute to. Once you've found, okay, there's a CSS error here, or there's a small Python function that doesn't do what I expect, then you make a pull request. So if you find the package and the code and the branch and everything, so all the stars are aligned, you can finally make a pull request. But then maybe you have some problems or some uncertainties, because maybe you are not so well acquainted with the Plone code base and how the project is organized and everything. So for example, in Plone we take quite a lot of pride in keeping the tests running. Some years ago, you could see Timo complaining and raging about that, and of course, nobody wants an angry Timo. So you want to make sure that the tests don't fail. Or maybe you don't even know which version you are targeting; maybe your fix is something Plone 4.3 specific, not Plone 5 or the 5.1 that we already have in the works. Maybe, okay, you just fixed that small function, or that bit of CSS or JavaScript, but is there anything else that needs to be added to the pull request, anything missing? Also, especially for newcomers: we legally cannot accept code from people who haven't signed the contributor agreement, and all of that.
So fortunately for all these things before, we have Mr. Roto, which is basically a Pyramid app that sits between or that listens to GitHub activity and then reports back to GitHub and tells also Jenkins to orchestrate everything. It's not everything perfect still. There's a few things that could be improved a little bit, but at least whenever you're creating a pull request, for example, it checks if all the people that made commits on that pull request have signed the contributors agreement. It checks if you have made any change on the, if you've dated the change log entry on changes RST and it tells you which PLON versions you should try to, you should run the tests before knowing if anything did broke. There's a few more things, but that's basically the overview of what Mr. Roto is actually doing, but maybe we could start doing more things. Like for example, something that we already do, but not on the pull requests, for example, is like code analysis. Well, yeah, in a way it's there. So code analysis reports, documentation, for example, if to a certain size, if you are adding a really, really big change on a package, probably that should be documented. Maybe if you add a complete new module or if you add a few more things, like a complete new package or a complete, like a bulk of code, that's a special, that's one that I really, I'm tempted to add that if you add a new pull request with a decent amount of code, but there's no tests being added, probably we shouldn't merge that because in a way it's not, so in Plon we have all this pride of having tests and we rely on tests to make sure that everything doesn't break and we get notified. 
But then you keep seeing pull requests and pull requests, and basically, I didn't do the numbers, but maybe not even 20 to 30% of them have tests, maybe 40, so to say. But of course we need to add tests when adding new functionality; and when you just change functionality or fix things, again, it would be nice to have them. That's probably something that people used to Travis would like: that we auto-run the tests. Of course, our tests are quite slow to run; the pull request job in particular takes more than one hour and 10 or 20 minutes, depending on the day, so that's not really quick feedback, but it would be nice to find some ideas on how to run that. Then there's something that GitLab does automatically: if all your integrations run fine, you can tick a checkbox and mark, okay, merge this pull request automatically, so you don't even have to do the extra effort. As we are open source, we probably want somebody to review the changes before that; but GitLab recently added these reviews that you can approve, so a pull request could merge automatically once there is at least one or two reviews, for example, or we could make sure, or force, that there has to be a review before actually letting a pull request be merged. Something that Eric will probably love: that Jenkins does the releases by itself. There was a talk earlier this morning about continuous delivery, and one of the key points of continuous delivery is of course not having to do manual releases; that's the difference between automatic delivery and continuous delivery. And of course there could be way, way more things to be done here. The code is in Mr. Roboto, and on jenkins.plone.org we have all the configuration, all the ways we configure our jobs, and all the integrations.
I seem to have quite some time still, so I could go a bit more into that, but especially: any ideas on anything related to Jenkins and the integration with pull requests that you find could be improved? I'd step back even a little further, because trust me, you don't want me going into GitHub, that's a given. I do a lot of user testing, and when I find either a problem, a bug, or a suggestion, I don't know where to go with that. And I realize you went into, I know, the bug trackers and stuff like that. But we talk about removing barriers, and I don't know where to go with that. And my feeling is, if I go to the wrong place and I have to assign the bug to someone that's not there, I don't know if I'm going to piss them off; I don't know most of these people. And this probably isn't where you are headed, and that's okay, I just want to throw it out there. But I think there should be some way, even if you become like a registered super user, whatever, I don't know, not a programmer, that I can go somewhere, or we can go somewhere, and put a bug in a bug tracker, and then someone else figures out where it goes. Sure. Well, that's one of the problems. The quick answer is that on GitHub we have the Plone organization, github.com/plone, and inside there there are thousands of repositories; but there's Products.CMFPlone, which is the main repository where bugs are reported. So if you don't know specifically that it's about plone.app.contenttypes or about whatever, just report there. And if you are not really that sure whether it's really a bug, or whether it's just you, and all these things, then definitely, and I would say even at the early stages, until you feel comfortable with GitHub, because maybe people just close the bug with a short reason and then you feel a bit put off...
...then community.plone.org is always an option; people do report things there, and people will point you to Products.CMFPlone or the specific package. There are actually quite a lot of repositories in the Plone GitHub organization that do not have the issue tracker enabled, mostly because we prefer to have it in one place. I mean, the idea on GitHub is that every repository is king, that's the whole model, but that doesn't fit us: all our code is spread across so many packages, but we treat all of them as a single Plone thing. So definitely, report either to community.plone.org, or, if you feel more comfortable, to GitHub, plone/Products.CMFPlone. What if, from the product page, whether it's on PyPI or, back in the old days, when they had a product page, there could be a link to wherever the heck you created that product, wherever they want me to make suggestions? I don't even like calling it a bug, because I'm telling you it's screwed up. You don't even have to classify it; you just open the issue and say, well, I found this. People, opinions, please. Sure. Well, I'm not sure; Victor may say something about that. On plone.org, do we have any direction page for, like, okay, I found a bug, or I have something? Okay. Sure. Let's see. The first order of business for you in particular would be to go to plone.org/support. Okay. Sure. So let's go to the bugs. Let's say I found a bug in Mosaic, just to take one, or I think it's a bug, or whatever. So from here, obviously it's not security related. All right. So for bugs that are not security related, it tells you to either go to the specific repository, if you know it, or if not, the catch-all Products.CMFPlone. Oh, sorry, I can't make that bigger. I didn't know this page existed, so thank you for showing it. Sure.
But yeah, I'm not sure if maybe that should be something on Plone's control panel. Sure. I mean, that could be kind of like a nice place. Yeah. Maybe even more. Yeah. I don't know where we are still using Trac anymore. We are also using some add-ons where you can deploy it with it. Yeah. The projects. You can also use the list for managing issues. So, okay. That's what I'm trying to get to. Okay. There's one project already. So, there in GitHub, now you have this Projects tab where you have a Kanban like this. And it's totally configurable. You can have as many columns as you wish. And you can drag and drop things so that you know the status. Well, I mean, a Kanban thing. So, yeah, definitely that's something also to organize. I have a lightning talk where I want to talk about this actually. But again, even if, for example, you know that it's about Mosaic — and I could tell you that Mosaic is still not core, so probably it should not be on CMFPlone, especially because plone.app.mosaic already has its own tracker as well — that doesn't matter. So, just always, if you don't want to think about it or just don't know, just go to CMFPlone and report there. And mostly people are polite nowadays. So, no, I mean, the worst that can happen is just that they close it. So, no problem. So, yeah, that's the... So, Gil, what do you think about — you saw the lightning talk yesterday by that guy who... Yeah... the Pyramid guy? Yeah. What do you think about that? Well, if I'm not mistaken, that's probably when I... So, maybe let me just quickly... Oh, yeah, exactly. So, when you see this logo here on the status of a pull request, if you see this logo, that's from Mr. Roboto, that's the project I was talking about.
It checks the changelog entry — so here on the changes, they did add something. But then the contributor's agreement check failed because, in this case, Nathan just committed with a random ID — well, not so random, but an ID that's not known to GitHub. You can add those names and emails in your configuration, on your user profile. I think your question though is more important for... Yeah, I was... So, I talked to one of the guys who runs a — his company is to help people do their governance, and he said he'll help us get it sorted, but right now it's like: the Americans get to sign it online, and Europeans have to do a physical copy, because there are laws that are restricting it. So that's why we've kept it as one thing, so there aren't two different sets of rules. So it's kind of restricted by the Europeans — like the Germans are very strict around it, and we're going by that strictest rule right now. We'll bring it back up again, because we bring it up every year, I swear. So it'll come back around again. He said he'll have his attorney help us once we have a plan, but we don't know that we'll be able to have a plan to get rid of that and just click a button in your contributor thing. Because it's constrained by the laws of other countries, of the EU and other things, we may be stuck with it for a while. Well, still, that was a bit unfortunate, because Mr. Roboto is enabled on all the repositories, but for the documentation one you don't need to have a contributor's agreement. And actually, I think that either for this one, at that time, or another one, I already turned it off. So nowadays you don't have that. But still, I think that the other problem was that it was pointing — I'm not sure if it was in that case, but there were some other times where following some links or just googling it ended up at a place that was telling you something totally outdated.
And I guess that it's already been fixed. But so far, right now at least, what Paul — maybe that should not be recorded — but Paul always says that the PDF is enough. You just print, sign, scan, and send. That's not that much. That's not perfect, but that's fine. And especially it's just one time, not every year or something like that. But yeah, so my idea with — the reason Mr. Roboto was created was basically for this thing here, for running the tests and everything. But we certainly can expand it, as I was saying before, to documentation. So here we can even provide welcoming messages, or where to point to, or maybe — we can even monitor pull requests, and if within three weeks or a month there was no activity, then ask the person if they're still interested, or close it, things like that. And in the end, the more you automate, the less you have to think about it. So it's self-service in a way. Or maybe, if everything is done but nobody merged the pull request, maybe try to pick some names, like the few last persons that touched or merged pull requests, and put them on CC, in a comment, and say: hey, as you were the last ones involved in this package, would you mind giving a review of this so that we can move things around? Because of course you always have the notifications up here, but after you've seen it, if there's no more activity — was it merged, maybe not, maybe yes — who cares or who knows. And of course newcomers probably don't feel like saying, hey, can somebody look at it, can you look at it? Core developers maybe will feel totally confident doing that, like, hey, can you please review it, but others may be a bit more hesitant, or maybe don't even know how to ping as well.
So if there are any ideas, or if you have noted all your pain points while contributing, please put them here on CMFPlone so that we can discuss them in the community. First of all I want to say thank you for all the work that you've done, because it really has gotten so much easier. So much easier. So that's great. One thing — I know that Jens created a whole set of labels, and he sort of put some categories on the labels. And some of them are actually sort of workflow, really, like the tool. Yeah, and I'm still so confused about those, I just never know what to do with those. So if any of that could be automated in some way, that would be great. Well, running the tests is so time consuming, and maybe you just create the pull request because you do it through the web, you just edit the file, but then of course, for example, there's the changelog verifier that will probably yell at you that the entry is missing and so on, then you edit again. I was thinking that maybe, to automatically trigger Jenkins, to automatically run these jobs, you add a certain label — so for example this testing label, so as soon as you say it's in the testing phase, then it can start. But definitely, more or less the idea is that it's like confirmed, in progress, testing, ready and deferred, and probably we could automatically add the, well, in progress label. And then once you manually move it to testing — I mean, we can of course just create some comments here like, please add this label as soon as you feel it's ready for that. So we could add the testing label, or if you manually added the testing label, then Jenkins will start. And then if everything is fine, Jenkins can change that to ready, and that can trigger the next step. Or that can be the final state of the pipeline.
And then if three weeks later there are pull requests that are still ready, but nobody merged them, then start pinging people every now and then. And then maybe you can just move it to deferred if you don't want to get bothered by that, because maybe it has to wait for something. One thing — I'm not sure if there is anything that matches, but one thing I noticed is that a lot of times people create pull requests just so that tests are run. Tests are started automatically and they say, please do not do anything with it, do not merge. That probably should have a tag. Yeah, I'm sure that would be a good idea. There's another thing: all these checks here — I can't remember really, but I think it's somewhere here in the settings that you can make them mandatory. So if that check is not green, it cannot be merged. So that's probably something that we want to do at some point. I can't remember. But the thing is, yeah, there are all these work-in-progress pull requests that you don't want merged, and actually Jens, on one of his cleanup pull request tasks, just ignored everything and said, oh yeah, the label was that and that, so I did merge. And it was like, well, yeah, but I did say there... So, for example, in GitLab, if the name of the merge request starts with work in progress, so WIP, then it doesn't allow you to merge. It's kind of like here, because now there are conflicts. So yeah, there's some process that could be improved here as well. It's a minor thing. Again, it's some pain that you take away from people, especially when you have pull requests that depend on other pull requests and they have to be merged together or sequentially, so to say. Then of course you want to really do that.
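The confirmed / in progress / testing / ready / deferred flow discussed here could be wired into a bot as a small decision function. This is only an illustration: the label names come from the talk, but the function and the action strings are hypothetical placeholders, not part of the real Mr. Roboto.

```python
# Hedged sketch of label-driven PR automation. Given a pull request's
# labels (and how long it has sat idle), decide what a bot could do next.
# "block-merge" covers the WIP case; "trigger-jenkins" covers the testing
# label; "ping-reviewers" covers ready-but-unmerged PRs after three weeks.

def next_action(labels, days_idle=0):
    """Return a symbolic action for a PR based on its workflow labels."""
    labels = {label.lower() for label in labels}
    if "wip" in labels or "do not merge" in labels:
        return "block-merge"          # never merge work-in-progress PRs
    if "testing" in labels:
        return "trigger-jenkins"      # label added -> start the CI jobs
    if "ready" in labels and days_idle >= 21:
        return "ping-reviewers"       # CC the last committers on the package
    if "deferred" in labels:
        return "ignore"               # parked on purpose, stop nagging
    return "add-label:in progress"    # default state for fresh PRs

print(next_action(["testing"]))               # trigger-jenkins
print(next_action(["ready"], days_idle=30))   # ping-reviewers
print(next_action(["WIP", "testing"]))        # block-merge
```

The point of the sketch is that every transition is a pure function of labels plus elapsed time, so the bot stays stateless and easy to reason about.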
And that's again the problem with GitHub: having just the repository as the unit — those are the limits it has. On the other hand, it's also our own problem for splitting into multiple repositories. There are lots of larger organizations — for example, I read about Google, and I think even Microsoft for some projects — that had, just like us, hundreds and hundreds of packages, and they decided to just create one massive Git repository. Of course it would be a bit too much maybe if you look at the dropdown of branches or so, because it can be a bit daunting. But then everything about testing and integration and releases and conflicts and so on — that would just be gone. I'm sure they're probably thinking about some way to accommodate this kind of use case. Maybe. We don't know, but it seems like a very common thing. Sure, but they also target — I mean, well, now that they added these reviews and these Projects, it starts to feel like they are really moving not just to code repository and code review, but also to project management. Maybe they do that, maybe they just do that for the enterprise version, who knows. But anyway, that's still a problem for us, for ourselves. We need to find a solution that works better. Because so far we have just patches, and the way Jenkins is configured so that we can do all of that is sort of insane. If you compare it to, for example, what people are used to — it's just Travis, you just put that file there, connect it to Travis and everything works. Sure, we are not a single package. We cannot really do that. But we should somehow be able to get closer at least. Yeah, and a big plus one for the idea of adding a requirement to add documentation. The — this changelog verifier you mean, or for the docs?
Yeah, the problem with this is that here you can only have — well, actually we have a lot of them here — like, okay, working and failure. And it's kind of like, well, if we agree on the logic to decide whether something should have docs or not... But then it's also like, well, maybe I refactored something, but it's already explained somewhere, because I'm just merging two packages, say, and in the package it's already well explained. So maybe just a comment is probably enough, like: okay, you are adding quite a lot of changes here, you may want to think about going to the documentation repository and making contributions there, explaining the feature, or at least creating an issue explaining that you are about to do that, so that we document things. But yeah, definitely — I mean, when previously there was this talk about Plone versus Drupal, Kelvin was saying that, yeah, Plone is really good in terms of documentation for upgrading, because there's all the documentation. At least for the Plone 5 one, most of the things were there, lots of things actually were there. But there were some big gaps in that. And I don't really see us as a community making much change there. Again, it's on the app side. I find myself contributing quite a lot to the documentation because I know that it's maintained and I know that there are people there caring about it. So that's also something — if the docs team is also moving more towards engaging a bit more with continuous delivery or so, so that you can see the fixes that you make, then it's probably a bit more appealing to contribute there. I mean, every now and then I'm just fixing broken links, or links that point to svn.zope.org or things like that, and just pointing them to the new places.
But that's already just cleanup, and minimal. Yeah, well, just checking in. Okay. So thanks for listening. I know that not going to David Lee's talk was challenging — I mean, probably everyone was there. But oh well. Yeah, but I want to at least — I mean, I have no code done for that, although it would be tricky to do — I mean, trivial, sorry. But in a way, I want to add that. Maybe not as a check like you have to add them.
Your first-time contribution to a large project is always a bit daunting: lots of things to learn and read upfront. Even for seasoned contributors there are always some rocks in the way.
10.5446/54063 (DOI)
I want to talk about our relaunch project for our main website, www.fhnw.ch. It's a university in Switzerland. And like everyone else, we are struggling with a similar situation: we have an external design and want to integrate it with the rich Plone editing interface. And I want to give you some insights into the project, the development process and the tools we developed to cope with this situation. And actually, Plone empowered us to do really good stuff and build complex scenarios very quickly. So about the university itself, here are some numbers. We have approximately 10,000 students. On our old website, we had 400 editors. We stripped it down, or tried to strip it down, to 40, which is easier for us to support. There are nine departments and they are spread over four Swiss cantons, so it's in several places. My job involves a lot of traveling from one place to the other. In our old site, we had approximately 118,000 records. It was used as an archive; nobody wanted to delete anything. We wanted to get rid of this. We wanted to make the new site leaner. And we are a long-term user of Plone. We've used it since 2006 and we started with version 2. Quick question: so your old website was Plone too? Yes. All the websites were Plone. We did all the steps — 2.5, 3, the current one on 4, and now it's going to be 5. So the basis for the project was the web strategy. And here are listed some cornerstones of the web strategy. People working at universities are probably familiar with this; they are all kind of the same. The first websites — not the first websites, the 2000-ish websites — of universities mirrored their internal structure. They had a landing page for every department and institute. And now they want to get rid of this and make it more approachable for people coming to the university for studying purposes, for training purposes, for research. And we wanted to do that as well. And therefore we wanted to drastically improve the user experience.
Like some others, we decided to do a Big Bang release, which means we do no content migration but start from scratch — with the content, with the layout, a new CMS, not just a new major version of the CMS. So everything set to zero, which brings major problems, because you don't really know where to start. And as I said, it involves no content migration. So from the development perspective, we decided we want to go design first. This is the major cornerstone, and everything — the content — builds around that. The information architecture, the design, was really the first cornerstone. Another cornerstone for the development was the split of front end and back end. This is what the guys doing plone.server and the REST API are bringing to the next level. We couldn't wait for that. We looked into it. We had a performance problem at this stage without server-side rendering, because if you use the naked REST API and then render it into a static page, the page takes quite some time until it's fully there. Now with the new versions of Angular 2 and React, server-side rendering is possible, so it might get easier, but we do the rendering of the templates on the server side. We did some evaluation and decided to go for Plone 5 and use Mosaic. We recently switched to Mosaic 2 and we're very, very happy with it. So thanks to all who worked on that at the sprint. And another strategy was to stay close to the core to make future upgrades easier. So here I brought you a picture of our project setup. As you see, it's a very complex scenario — as the university itself is complex, the project was too. And we had several teams, some of them internal, some external. We used diverse tools for communication and for storing content. And therefore we developed a communication model. And again, we oriented this communication model on the layout, on the design.
And what we did: we take the layout of the page — this is actually the new layout of the site — and split it into parts. And we call these parts widgets. And every widget has a name and a number. And throughout this complex project setup, people know exactly what we're talking about when we talk about WE007: that's the navigation widget. So this is the back end. If you use Plone, this looks familiar — this is the Barceloneta theme. So it's basically the same page without the teaser if you switch to the back end. Nowadays, WYSIWYG is not a major issue anymore — or it is a major issue, because there are so many devices and so many browsers you have to deal with. It would be a lie to say that what the editor sees on his screen represents what the user sees, because you actually don't know this. And how did we achieve this? Here is the code we used for this. There is a hidden feature in plone.app.theming, which is called the theming policy. Do you know of the theming policy? I suspect so. I don't know if it's documented, but we found out. Basically, what the theming policy does is take the settings from the control panel and apply them via the registry. It uses the parameters set in the control panel for the stuff dealing with Diazo. What we did is override two methods: getCurrentTheme and getSettings. We have a method, the front-end check, and we check for a pattern in the URL. If "webcms" is in the URL, we go to the back end. And if it's not, we go to the front end, which is basically the theme selected in the control panel. And here's the Barceloneta. Same thing with the settings. Here we needed a little trick, because in the default settings there is a RecordsProxy element, and this RecordsProxy element writes to the database. And it's cached. So we need this NoWriteRecordsProxy, which is basically a dictionary which is not stored in the database.
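A rough, framework-free sketch of the switch just described: in the real site this logic lives in overrides of plone.app.theming's getCurrentTheme/getSettings methods, but here the Plone parts are modeled with plain Python so the URL check itself is visible. The "webcms" marker comes from the talk; the theme names and class name are made up for illustration.

```python
import re

# Hedged sketch of the front-end/back-end theme switch. The real code
# subclasses plone.app.theming's theming policy; this stand-in only shows
# the routing decision: "webcms" in the URL -> editors' back-end theme,
# anything else -> the public front-end theme from the control panel.

BACKEND_PATTERN = re.compile(r"/webcms(/|$)")  # assumed URL marker

def is_backend_request(url: str) -> bool:
    """True when the URL matches the editors' back-end pattern."""
    return bool(BACKEND_PATTERN.search(url))

class SplitThemingPolicy:
    """Plain-Python stand-in for an overridden getCurrentTheme()."""

    def __init__(self, frontend_theme: str, backend_theme: str):
        self.frontend_theme = frontend_theme
        self.backend_theme = backend_theme

    def get_current_theme(self, url: str) -> str:
        if is_backend_request(url):
            return self.backend_theme   # e.g. Barceloneta for editors
        return self.frontend_theme      # the public design

policy = SplitThemingPolicy("fhnw.theme", "barceloneta")
print(policy.get_current_theme("https://example.org/webcms/news/edit"))
print(policy.get_current_theme("https://example.org/studium"))
```

The same URL check would also drive getSettings, returning the cached no-write dictionary mentioned above instead of the database-backed RecordsProxy.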
And we return these parameters on the fly when this method is called. And this helps us to split the front end from the back end when we need it. So I'll give you a short demo of the tool we used for the front-end design. It's called Estatico and it's open source. And it was developed by the agency which supported us. Where's my cursor? Here. So this is what it looks like. The front-end designers do builds on every commit for the front-end design. And what it puts out is a preview, and it's basically a whole set of HTML templates. CSS, JavaScript — it's all in place. And this was actually very good to show to people before we implemented it in Plone. So we could give it to our stakeholders and say, look, the bachelor landing page will look like this. And they could see it all working, all in place — the navigation, all with static dummy data of course. But they could look at it and say, okay, this is what I want. Here all the accordions are also working, because it's all JavaScript. And then, before we implemented it in Plone, they could do the check and say, okay, yes, that's what we want. And all of this is split into the widgets. And I'll take here a teaser widget. This is one part of the page. So all these pages are composed of these widgets. And on the widget page we see the preview, how it looks with some dummy data. We see the code, which is actually a Handlebars template. Handlebars is a very popular templating language — or I don't know if it's popular; they told us it was popular, I didn't know it before. And we have some dummy data, which is basically a JSON record. And what we do: we copy this output of the template and this dummy data into our Python structure, which — since we use Mosaic — is tiles. And as I said, we do the server-side rendering. And to make this happen, we needed to render these Handlebars templates on the server. Fortunately, there is a Python implementation of Handlebars.
It's called pybars, or pybars3. I think it was originally developed at Canonical. It was then abandoned, but now it has a maintainer again. And we wrote the Plone wrapper, collective.handlebars, which provides us with some basic classes. And all we have to do is override this getContents and return the JSON structure we have in Estatico. And we fetch the data from Plone. In this case, we get it from a tile configuration, but there are other use cases, other scenarios. Sometimes it's content listings and sometimes it's just static content, so it varies. But the good thing about this: in the first place we just copy and paste the dummy data we get from our design agency. The moment it drops out from the agency, we have it in Plone — it's a matter of minutes. And people were really impressed by the speed of the development, because the moment they saw the design, they could immediately play with it in Plone, and then we iterated to replace this static data with data we get from Plone. And at the end, I want to share some performance tips we found. It's basically the usual: we use RelStorage, which gave us some increase in performance. We make heavy use of memcached on two layers: we use it for RelStorage as a cache, and we use it for the RAM cache. What we discovered is that it's important to have a CPU with a high clock rate. We used to have a one-size-fits-all virtual machine which had many cores but not such a high clock rate. And we exchanged it — we now have a dedicated server in our virtual environment with a high clock rate and not so many cores, which is far more efficient for our use case. And the third one — I wonder why it isn't more popular and more documented and taught — the biggest advantage was delivering all the static resources, also for the back end, also for Barceloneta, through the Apache web server. So we have our web server catching all the ++resource++ URLs, and it delivers them immediately.
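The tile pattern just described can be sketched like this. In production, pybars3 compiles real Handlebars templates; to keep this example self-contained and runnable, a tiny regex substitution stands in for pybars3 (it handles only `{{name}}` placeholders, no helpers or blocks), and the tile class is a plain-Python stand-in for collective.handlebars' tile base class. The template and data mimic the "copy the Estatico dummy JSON" workflow; all names here are illustrative.

```python
import re

# Hedged sketch of the collective.handlebars pattern: a tile overrides
# getContents() to return the dict that feeds a Handlebars template.
# render_handlebars() is a minimal stand-in for pybars3's Compiler.

def render_handlebars(source: str, data: dict) -> str:
    """Substitute {{name}} placeholders from data (no helpers/blocks)."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(data.get(m.group(1), "")), source)

# Template copied from the front-end widget library (dummy example).
TEASER_TEMPLATE = '<div class="teaser"><h2>{{title}}</h2><p>{{text}}</p></div>'

class TeaserTile:
    """Stand-in for a Mosaic tile; real code reads the tile configuration."""

    def __init__(self, config: dict):
        self.config = config

    def get_contents(self) -> dict:
        # Start from the designers' dummy JSON, then swap in Plone data.
        return {"title": self.config["title"], "text": self.config["text"]}

    def __call__(self) -> str:
        return render_handlebars(TEASER_TEMPLATE, self.get_contents())

tile = TeaserTile({"title": "Bachelor", "text": "Study at FHNW"})
print(tile())  # <div class="teaser"><h2>Bachelor</h2><p>Study at FHNW</p></div>
```

The appeal of the approach is exactly what the talk describes: the template string and the dummy dict are copy-pasted from the front-end tool, and only get_contents() is later rewired to real Plone data.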
So the back end isn't hit, and this gave us the biggest performance improvement. You've all seen this page: help us improve, please fill in the contact survey. And if you have any questions, you can reach me on Twitter or on GitHub, or you can talk to me. Thank you.
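That last tip — Apache answering the ++resource++ URLs itself so they never reach the Zope backend — might look roughly like the fragment below. This is a hedged sketch, not the FHNW configuration: the cache path, backend port, TTL and the use of mod_cache/mod_cache_disk are assumptions.

```apache
# Hedged sketch (paths, port and TTL are assumptions, not the FHNW config):
# let Apache cache and serve Plone's static ++resource++/++theme++ URLs
# from its own disk cache, so CSS/JS requests never hit the Zope backend.
CacheRoot            "/var/cache/apache2/mod_cache_disk"
CacheEnable          disk "/++resource++"
CacheEnable          disk "/++theme++"
CacheIgnoreNoLastMod On
CacheDefaultExpire   86400

# Everything else is proxied to the Plone/Zope instance as usual.
ProxyPass        "/" "http://127.0.0.1:8080/"
ProxyPassReverse "/" "http://127.0.0.1:8080/"
```

Requires mod_proxy, mod_proxy_http, mod_cache and mod_cache_disk to be enabled; with virtual hosting in place, the cached paths would be the site-rooted ++resource++ URLs.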
Tom will outline the development process, the tools used, and show some of the Plone code that made the relaunch of this site a success. Tom will describe the WebRelaunch project at FHNW - the University of Applied Sciences Northwestern Switzerland. He will cover - The lessons they learned about taming Plone 5 and plone.app.mosaic, - Working with diverse external partners who had never heard of Plone, - Completely separating the front end from the editing area ...and several other hurdles and goodies they encountered on the way to their new website, which will be released soon.
10.5446/54102 (DOI)
Yeah, good evening and thanks a lot for the invitation. I have now a five-minute statement and I'll try to make it really five minutes. I was quite frustrated going through the projects and choosing pictures, because I always want to show more and more and more, so I had to cut and cut and cut. But I also thought I would maybe dig a bit in our project box, to say a little hi to Assemble, and start with a project back in 2008 in London — and the theme of the discussion: what is the role of the architect. In our projects we often transformed it into something I would call the architect taking over a role. For instance, here we became lifeguards, which is quite a simple role takeover. We had been invited to this architecture festival in London, where we were asked to make a hub in the south of London for a few days. We found this place at 100 Union Street, where a big building had been torn down a little bit before, and eight weeks later the developer wanted to build a bigger building, of course. So we had a gap of eight weeks. We thought we would take three weeks to build, three weeks to be open, and two weeks to dismantle. So the two days for which we were asked, we extended to three weeks, which is still temporary. The lifeguards made a lido, a temporary swimming pool, where we then spent three weeks and lived on site in the neighborhood — probably to the frustration of some of the people who had heard there would be a new lido: only a 20-meter-long pool in one direction, and then this two-meter-deep diving pool. We had some facilities, and we had a lot of people of different ages and different social backgrounds, I guess — from the neighbors to the architecture scene of London, during the day and during the night of course. And that is one image of this lido while activated. We had to negotiate very hard with the landowner, the developer, in this first intervention.
We had to sign the contracts several times, and during the three-week opening he kind of changed his mind — maybe because the financial crisis struck him, or maybe he found it interesting to keep on with such interventions, which he did for the following years. So we had some projects of our own on it — I won't go into it, five minutes is not enough. Other collectives have been intervening on it too. Nicholas, my dear colleague from EXYZT, still lived on the site until some months ago, I think. I saw on Facebook the closing party going by, so that was a very nice case of a temporary project turning into a long-term experience. And on the other hand we also had very nice and dear colleagues, at that time volunteers, on the site. In 2008 one was Maria, from what was to become Assemble actually. We had Paloma, from what was to become Practice Architecture. We had Oliver Wainwright, who's now writing in the Guardian. So it was kind of an interesting hub at the time. And it was a leftover piece of private land which we turned into a public space by taking this role of the lifeguard. Here, another project in London, in Dalston: we had been invited the year after for a five-week experience, where we extended a little bit this idea of taking over a role towards a certain uncertainty. We decided to become millers, and of course nobody knew how that goes — technically we knew — so we built this mill, and then, while being millers, we also installed a bakery, with even less knowledge on our side. So the first thing was of course to find the people who knew how to mill and how to bake, and that was what happened actually: people would take the place over slowly during these five weeks. Afterwards, we had agreed that we had to dismantle, but on site there was a certain community and power which then went to the Hackney council and claimed continuity.
So from this Dalston Mill, where we were for five weeks in 2009, a project came up which is still there, called the Eastern Curve Garden — maybe some of you know it — and we had the honor to be invited again to make a temporary — not a temporary, a several-years-long temporary — shelter, which was this roof, which is the Dalston Barn. So again, this time a leftover which became public space. This project here was further down in the south, in Madrid, where we had been invited for a one-night intervention, where we said, also here, one night is maybe not enough for us to initiate something, so let's try to push it — which was a big negotiation with the sponsors, but we had the collective Basurama, who invited us, behind us. And we proposed, right in the middle of La Latina in Madrid — where this building, the public swimming pool, had also just been torn down, and then the developer's plans went bankrupt or corrupt, whatever, so there was this huge leftover, this huge empty plot in the middle of the city — to fill it with such a wooden structure. A city island, we called it; it was a sort of a rainforest thing. Five weeks it was open, and then we went through several conflicts. I don't want to describe it in depth, but this idea of a conflict you can sometimes generate in such a situation — where suddenly people would not only be there to consume but have to bring themselves into the situation, have to engage — led in this project to a really strong movement of neighborhood and architecture collectives, and this place is still open. It's now called El Campo de Cebada, and it may also be known to some of you in Madrid. As Anh-Linh was saying, often in our practice it is about the workshop, the construction — this idea of bringing together the moment of thinking and designing a project, conceiving it, and building it. Just some images of different construction sites of our projects. Sometimes, like this one in 2012, we invite people to join, and then we open it up and people
would join just out of curiosity about what we are actually doing. For three weeks of building — it was for a Capital of Culture, in itself a temporary event — we knew that it would stay nine months. We used 40 cubic meters of wooden boards, we had 50 people on board more or less, and that was what we didn't tell the curators: we had in mind to dismantle after the nine months and to use the wood for another project, either us or the people who participated. And that happened then: Casa do Vapor was born. 2013, one year later, we went down to the south and built up, in a very interesting territory of informal settlements, this kind of little incubator of ideas, the Casa do Vapor, which is a structure without any predefined content, and which would stay there one month, one summer, and fill with content through our activations and the interest and the participation of the people — content such as this skateboard ramp you see here. That was an initiative of French artists who thought it would be very interesting for the local kids, who then rather played chess in front of it, and the artists themselves used it of course. But on the other side, the library, which was in this very small room upstairs, is still existing now in another project, and it also got a new child, a new library. So in the area two libraries came out of this, and there are still a lot of projects we are continuing. So who's behind all this? It's not only me, of course. This is a photo of what Anh-Linh was mentioning: when we were in Venice in 2006, everybody had these yellow shirts; it was a common thing. Everybody has a lot of arms, which kind of represents the way we see this collective idea of working together. You have the arms — you know, you bring in your skills; in my case it's carpentry, but we have the graphic designers, we have the cook, we have the cleaning people, the DJ, etc. etc. And the other arms are there to help the others. That is an image of this project we did, which
was also a temporary project due to the fact that be another three months long but afterwards people would come to us and ask us if they could reuse material and then by being interested in where this material would go to and why they ask we discovered that they do this quite frequently and that that the idea of reusing material became after the next beyond two years later the movement is called a rebion other which is still also ongoing so can design change society this would be a question for our rough Seraphine who was 74 when he died and he was the oldest member of our collective and he would ask he would answer rather that society shouldn't change the designer thanks a lot
Few contemporary practitioners rival Assemble Studio in their dedication to overcoming disciplinary limitations, cutting through the fragmentation of modernism, and changing daily life through community-oriented design – for this very reason, they were recently awarded the prestigious Turner Prize. After her lecture, Amica Dall of Assemble will discuss the social role of architects and architecture as well as the relation between craftsmanship, production and the city together with Alexander Römer of constructLab, and Jürgen Patzak-Poor of BARarchitekten. The lecture and discussion will be held in English. Introduction by Rainer Hehl Moderated by Philipp Oswalt and Anh-Linh Ngo
10.5446/54111 (DOI)
Thank you everyone for coming, and for this incredibly nice invitation to do a kind of small duo talk and afterwards to discuss the idea of architecture and photography — where it collides, or intersects, or where each has something to say to the other. I will show two projects that were finished this year. That doesn't mean the photos are from this year, but they were both finished this year. And I always like to start — well, I have a few starts for lectures, and this is one. So I have made you look at this photograph already for a very long time. The photograph is called Future Highway, Xiamen. It's in China; I was living there for half a year, and I started to document this kind of transformation of, let's say, even recently built architecture. So this is actually the demolition of a house that had not stood there for longer than five years, in order to build a highway through it. And why am I showing this? Because I want to explain something about photography and reference, or how the meaning of an image can change very fast and by very simple means. When I was working on these photos, I would make a quick scan and send it to friends, to architects, to other photographers, and sometimes you get a reaction and you understand whether an image works, yes or no. So I sent this one, among others to Bart Lootsma, and five seconds later he sent me this. And I was completely shocked, because I didn't know what it meant — well, clearly there's a kind of overlap of the two. It's a tarot card called the Tower. I quickly googled what it means, because I don't know tarot, and it said: if you pull this card, your understanding of reality will change forever. And I thought, this is really incredible. This is in a way what photography sometimes can also do: when you depict a space, it is somehow never the same after.
And I always thought this was a very powerful metaphor for understanding photography, but also for understanding how an image lives in a world of other images and depictions — and these depictions have meanings. So the second — well, actually the first thing I want to talk about is a book that just came out. It's not even for sale yet, or actually it is for sale, but only in London. It's called The Construction of an Image. And it has this photograph on the cover, which is called Ring Road Houston, which I made in 2005, so already 11 years ago. It's somehow a kind of strange image which somehow represents my work, though I never used it as a representation of my work — but if you try to find an image of mine online, this is normally the one that pops up. And then there is the AA Gallery in London, and they asked me to make a book just on this one image. I agreed — this was three years ago — and it took me three years to figure out how you make a book on one image, and how you go back in time to understand your own work from a little bit of a distance. I did basically two things. I made a little reference book, which I normally do for each book that I'm making — it's a very simple thing, and I will show it a bit later. And I decided to go back to the sequence of images that I made before this photograph came about. I never thought about it in that sense, but basically it had a very clear chronology, from the moment I arrived in the United States to two months later, when I made this photograph. So I will take you quickly, in a very simple way, through the chronological sequence of that time, and sometimes I will tell something along with it. So we start, in a way, with the dome of Buckminster Fuller in Baton Rouge. It's a forgotten dome. It's extremely big — 100 meters wide — and it's one of the only ones he made with an external structure. And it stands in this landscape in the middle of nowhere, completely forgotten, rusting.
So this was in a way my entrance to America — not the kind of very designed landscape of Holland or of Europe, but a landscape where everything can be left behind: when you need a new piece of land, you just start next to it, and you don't clean things up or remake them. From the same travel there were landscapes with a kind of inverted object — in this case a former volcano, a nuclear waste storage site, a dam in the Central Valley, which is a real dam, not a construction. Anyway. Then a kind of thematic of the horizon started to appear. They are maybe not super interesting as photographs, but it's much more about the rhythm of how they were made — they were made literally after each other, and sometimes an idea transfers from one image to the next, and they kind of intersect. Another empty reservoir. I visited a lot of water reservoirs — this was one of the themes I was photographing — but they were all empty. Then other pieces of architecture started to interfere: after Buckminster Fuller, other pieces of architecture, all in states that are almost unrecognizable. This is the Ennis House by Frank Lloyd Wright in Los Angeles; it's almost similar to this overflow of water in the San Gabriel Mountains. This one is called Concrete Rundown — a failed utopian experiment called Llano del Rio, in the desert. A hidden house. Another piece of famous architecture, but a bit hidden. It also tries to relate the architecture to the landscape — the kind of relation a piece of architecture has to landscape. This was the intersection where we were living; it's called La Brea and Olympic. Then slowly it comes to an end, and this is Philip Johnson's extremely good and quite absurd church, which is next to the entrance of Disney World.
It's almost an office block and in the inside it's completely empty and it has this technology that is standing there and it's a church. It's a church where they tape each week a TV show, kind of a TV preach, called the Hour of Power. It's real architecture, I think this is very important to understand. Then a few days later I was passing this park. It's a park where there's this perfect building in the back. I tried quite a long time to understand how can I construct. I thought it was an interesting place, but how can I construct it to put this image back in architecture. Then I remembered these famous collages of the Super Studio, of the continuous monument. I all of a sudden understood if I take off the top of the building and if I take off the side of the building, then somehow I can recreate this idea of the continuous monument without making it very explicit. You could almost imagine this built piece as something how the continuous monument would have looked. In a strange inversion after, we come back to the subject of the book. In an inversion, one week later I was in Houston, and I was actually in Houston because, well, it was a longer story. I wanted to go to Houston because there had been the Hurricane Katrina and I wanted to see the devastation and the ruin of what it did to the landscape. All of this I didn't photograph, but on the same day that I went through this kind of ruined landscape, I decided to photograph this office block. The office block somehow, I don't know, for me, all the images that I made before started to add up to each other and somehow came into one image as a kind of, I cannot say collage, but each element, so each layer, the horizon, the kind of a mirror of the office, the kind of capturing of the landscape within a building, the kind of a, let's say, ununderstandable relation of landscape and objects. It all of a sudden was all in one image. 
So next to this series of photographs, which in a way explain the making of that photograph — or the conception of it, not the technical making but the thinking of it — there is a little reference book that I normally never show, but in this case, for the book, I made an exception. It is very simple: I have a huge amount of reference images on my computer; I print them out on A4 and fold them, and there's a little A5 reference book. And these reference books somehow guide my view; they tell me what the relation is between the image that I make and images that already exist in history or in the depiction of architecture, or how you could read it. Ring Road Houston, which I normally call the Golden Building, became for me this relation of the camera obscura — the fact that you have an object through which the landscape is captured and can be seen, and the grid on top of it through which you look. So I show a few pages of these reference images. It plays out between these grids through which you look, through which you measure the world, with Dürer on the one side and the Ford Foundation on the other — in which you then all of a sudden also understand that the Ford Foundation is not only a kind of first incarnation of this atrium, but also a device to look at the world in a very structured way. The observatory of Robert Morris, and the burning of the dome of Buckminster Fuller, because Fuller is in a way a very important person in the story. Hans Schabus on the left and Superstudio on the right — and the Superstudio image I didn't know when I was making my image; it came only afterwards, and it somehow was logical that the image of the Golden Building had a kind of presence in other people's minds, and somehow I didn't have to do much for it. Asphalt Rundown by Smithson, and Displaced/Replaced Mass by Michael Heizer.
And then this I really like, the Mies and the Philip Johnson in which Philip Johnson's glasshouse, I think is a kind of a perfect example of how he is misunderstood in the sense that you have the beautiful painting of Poussin that he already had before, which he puts as the only image in his glasshouse and he reshapes the whole territory that he had after the painting and he puts then later this glass box over it. So you can really wonder how much it is about the aesthetics of the modernist architecture. You could almost claim he tries to live in the painting and with Mies on the other side, which is then the totally harsh opposite. You can play with these kind of references and you can let this image live in it. On the right you see kind of a, well, how to say, the appearance of Virgin Mary in a very banal office block. There were almost a million people who came to look at the banal piece of architecture and the discoloring in the façade, which I think is absolutely fantastic. And then here, of course, it's the image as an object. Now I go maybe a bit faster and I show you one exhibition that I made this year in Paris. It's called Earth Pillar. It has a set of works that I've been working on for the last maybe 10 years and it is a correction on the, I would say it's a correction on the fact that most people now see me as a person who photographs architecture. I think I photograph landscape with objects in it and sometimes this object is architecture but it's always the relation of the object and the landscape that is together. So the exhibition in a way was a way to explain again that all the work that I'm doing is mainly about landscape. You see? So we start in Petra with a kind of perfect inversion. It's a room made out of landscape. It's also kind of upside down image because you always want the kind of darker part of something on the bottom. If this image is lying on my desk and someone passes, they always turn it around. 
They think that the dark side should be at the bottom and the light at the top, and I like to play with these kinds of conceptions of photography and of the image. The exhibition was called Earth Pillar, after the title of this photograph, which shows an extremely beautiful man-made piece of architecture in Djenné: the Great Mosque of Djenné. It's an adobe structure, extremely big, and the columns in a way take more space than the negative space through which you walk — a column is maybe two by two and the space where you walk is one meter wide. It's a very strange experience to be in this kind of forest of man-made objects, caught in it. And I like to show this because it's also kind of half a drawing: you're not sure whether you're looking at reality or not. So when you enter the exhibition, there are two works from a series I made for the Venice Biennale in 2014 — you see one on the right and one here on the left. They are depictions of a room in Siena, called the Room of Peace, which holds the first depiction of city and landscape ever made. And the space has a double image: we now look at the so-called outcome of good governance, and on the opposite side there is the so-called outcome of bad governance — and bad governance means war and tyranny; everything is, in a way, bad. The decisions of the city were taken in this room. People were sitting — not on these chairs — under the good governance, and they always had to look at the bad governance. And I was very fascinated by this idea that an image has a very direct relation to the decisions that are taken. The image is not free; it is not taken from somewhere else to represent something — it is really there to tell you: please take care of my city.
So I decided to propose this project for the Venice Biennale, and the team of Koolhaas decided it was fantastic — we'll put it in the first room that everyone has to pass through. Anyway, this is one side, the good governance: I depicted the room in a detail, a piece of the painting, a section, and a detail of people. And this is the other side, the bad governance side. And you had to go through it, so you were walking between these two long lines of photos. So the exhibition in Paris starts with these two depictions. This is the depiction of bad governance — the bad landscape, where you see war, and raping, and burning villages — and also, by a kind of sheer beautiful coincidence, it's the part of the painting that is completely destroyed by moisture. Here you have the depiction of good governance — the good landscape, prosperous, where all four seasons are connected in one image. From here, in the exhibition, you can go two ways: from bad governance or from good governance. So let's go from bad governance. From bad governance we enter another set of works that I've been working on but never really showed. It's work that I've been doing in the Amazon for maybe five, six years, on gold mining. I photographed illegal gold mining and legal gold mining there. And the illegal one is really a kind of half hell on earth — this turning up of all the soil. And I picked, especially in this case, Suriname, because it has this incredible amount of colors in the landscape, so photographically there is a lot to work with. So this is iteration number one. Here you see it a bit better — you see two people using the hoses to extract the gold, and then they use mercury to burn it out. And this is the second one of the same series.
If you go further into the exhibition, you get into a more abstract zone, which in this case is Petra again. This is fully natural, but the way I photographed it makes you somehow understand it — almost feel it — as an almost man-made object. And I wanted to come back again to this idea of reference. Reference for me means many things, but basically it means that you can take an image or a depiction made by someone else and update it — do it again, improve it, put it back into our time. In this case there is this beautiful picture by Mimmo Jodice, of Petra, where he focuses on this hole. I think it's a fantastic picture, but I also think you cannot make it anymore. It's too romantic; it's not, in a way, critical. So I tried to work with this idea, and this was, in a sense, my response. I photographed it so that you almost start to doubt whether this piece of the mountain is real — whether these puzzle pieces just fell out and behind them is nothing. What is the relation of these puzzle pieces? Are they just broken off? So there is a lot more at play, and in a way it is updated to our time, and also updated to my oeuvre, not to everyone else's. So if we then go back to the first room, we are in so-called good governance — which, I mean, is a kind of funny play on words — and in the back you already see a much more abstract image, a cave, which you saw already on the other side. It's the Grotta Gigante in Trieste — an extremely gigantic grotto — and it's the first work I ever made that was just nature. Well, of course it was underground, and it's almost a painting, almost a drawing, and you want to see 3D things in it that actually don't exist in reality.
In the same room there is another depiction of the gold mines, but photographed in a more, well, let's say less apocalyptic way, in which you focus much more on all the minerals that are actually in the ground and that are discoloring when they are unraveled. And a third image on a sand mine in Mali, where you really see them. You see little men in the top who are digging out sand, and they throw it down, and in the bottom it's salt, but it's also almost a city, it's almost a piece of architecture that's been carved out. And in this room is the image of the earth pillar. Then we go to the last room, and it encapsulates a set of works on the wall that is surrounding a temple in Egypt. It's the temple of Dendera, and we went to visit there because I was doing another project on Egypt that I will not show today. And by coincidence we came there and I got extremely fascinated by the wall, which basically means you go to an object which is very famous, and you turn yourself 180 degrees and you think, but what I see now is actually much more interesting. And the friends that were with me, they were laughing, they were architects, but they were laughing because they said, you will be famous for photographing just crumbling walls. Why don't you want to make the real thing? But I thought this was very interesting in the sense, in this idea of drawing, of wanting to see other things in an image. So I immediately understood this wall as something in between a landscape and an object. It almost depicts a map. It's pixelated. It's not anymore analog in a way. It seems almost digital. And I thought this was very interesting. So I had a set of these works. There are actually three, and they are in the last room of this exhibition. And then I will make a small jump. And this is also the end of the talk. I make a small jump because I think photographs, they have to be also a bit free of their form. 
So at that moment — and we're talking also about collaboration today — I wanted to show this work that I did together with OFFICE Kersten Geers David Van Severen for the Shenzhen Biennale. We were asked to make an object, a contribution. And we very quickly said: it's very interesting, let's make a wall. Let's make the wall by image — let's have the image as the main element of the architecture. And how can we do that? It should be as big as possible. They immediately said they wanted to make it out of brick, because the image has bricks. And the outcome was very simple: if you want to build a big wall, very high and thin, and it needs to be structural, it needs to be circular and round, otherwise it would fall. So we asked for an object five meters high, eight meters in diameter, and this is what they made for us. So basically it's almost a description of what you want, and it was made in that sense: we sent a very simple file, and they printed it. It's very low-tech in a way, but the experience of the space functions very well. Here you see the slice, 60 centimeters wide, through which you would enter, and you would be surrounded by this kind of panorama of bricks that almost become 3D again. So thank you very much, and later I will show another set of pictures, but only after Christian has spoken.
With Christian Kerez and Bas Princen, as part of the POSITIONEN series. Thursday, 8 December 2016, 7 pm, TU Berlin Architekturgebäude, Berlin. Moderation: Anh-Linh Ngo, Sandra Oehy. Is there a rationality of space beyond the human? The starting point of the discussion is Christian Kerez's contribution Incidental Space for the 15th Venice Architecture Biennale, together with Bas Princen's photographic works. Interestingly, Princen, an artist, was trained as an architect, while Kerez began his career as an architect by working as a photographer. What role does photography play in the search for a space that overcomes the constraints of clearly identifiable authorship? ARCH+ Features 51 takes place in cooperation with TU Berlin, Institut für Architektur, Fachgebiet Baukonstruktion und Entwerfen, Prof. Regine Leibinger, as part of the POSITIONEN series, and with Christian Kerez and Sandra Oehy/Swiss Pavilion at the 15th Venice Architecture Biennale.
10.5446/54122 (DOI)
Good evening, ladies and gentlemen. Mr. Zendner has kindly already introduced me, so I am spared that. I may welcome you tonight in no fewer than three roles. First of all, on behalf of Siedle, and with particular pleasure, all the guests who have followed our invitation. Then, in my second role, which I also take on very gladly, as a long-standing initiative partner and supporter of the ARCH+ Features series. We have accompanied this series for as long as it has existed — we are now, I am amazed myself, at event number 63. We support ARCH+ Features with continuing pleasure and also with pride, because we believe it is good and important that architecture is discussed publicly at the level that ARCH+ stands for. Of course, as a company we are not entirely selfless in this support — that goes without saying; we get something out of it. For if, like Siedle, you see yourself as a partner of architecture, a partner of architects, then you do well to know where this discipline is heading — and that is what we learn here at ARCH+ Features. This leads me to my third role today: that of supporter of the Vitra Design Museum and of this wonderful exhibition. I don't want to say much about the exhibition itself — we'll leave that tonight to those called upon to do so. But allow me a few words on our motivation. What moves us to support and engage in something like this? Well, as a sponsor we always have two motivations. The first is obvious: we want to present the Siedle brand in a good, fitting context and make it known — and I can hardly imagine a better context than this one. But that is banal; we share this intention with probably every sponsor in the world. The second is perhaps a bit more particular: we always want to learn from such engagements as well. That is special.
We want to take something away with us, and that is why our engagement is never purely financial. We always try to find substantive points of connection to the theme of the respective event — and ideally to contribute ourselves, as we did here, even to the content of the exhibition. In this respect, supporting this project was an absolute stroke of luck. We expressed our interest at the very first contact, and we immediately sensed interest in return. We were in conversation with Mateo Kries, co-director of the Vitra Design Museum, but also, from the beginning and throughout the whole project, with the curators, Andreas and Ilka Ruby. And yes, that was fruitful, that was excellent, and it offered us, to the fullest extent, the opportunity to learn that we were looking for. We had to adjust to new things. I won't go into any details about our system; when you visit the exhibition, you may find the opportunity to engage with it a little. Just this much: we used technological prototypes, and in the course of the work this proved far more significant than expected — we developed an entirely new operating concept, along with the accompanying user interface. And so it happens that here at Vitra there is a Siedle system that exists nowhere else at all, and of which we don't even know whether it will ever exist anywhere again. The hope, of course, is there. We learned a great deal, and as I said, I won't bore you with details — instead, perhaps, four important lessons that will certainly occupy us at Siedle beyond this project, and which perhaps even point beyond our field. The first lesson: it has to be simple. It has to be simple. We don't know at all what goes on in such an apartment.
It is a model apartment in which people will live together who presumably did not know each other at all before. We don't know how they live, who they are, what prior knowledge they bring, or how this living together will take shape. In other words, it is very dynamic. But every resident must understand immediately what is in front of them and what they can do with it. That is very, very important. This leads me directly to the second lesson we took away, and one that is not easy for us to take to heart: leaving things out is a virtue. That does not come easily to a highly specialized manufacturer like us, who has been practicing the opposite for quite a few years. Our daily innovation business usually consists of ever further refinement, of perfecting and extending functions. Here, however, we went round after round, always in coordination with the curators, and round after round an idea, a function, a possibility fell by the wayside. It hurts, but it does the project a lot of good, and it really was a learning effect: it does not always matter to do everything that is possible. Instead, what came out is what is essential — and essential here is what distinguishes this form of living, which is demonstrated to us here, from the conventional one that is our daily business. Just one example: we had to conceive a button that calls everyone, not just one person, or that calls the common room. This is a tendency that runs counter to what door intercom systems, which we stand for, normally do. Normally we stand for separation, for retreat; the threshold between outside and inside, public and private, is rather a division. Here it was an opening in both directions. This leads me to the third lesson: communication between people cannot be replaced by technology. Technology only supports it; it cannot replace it. That was surprising for us as well.
That is abstract, so let me make it concrete. Siedle has in its portfolio a so-called delivery box, which allows you to receive parcels even when you are not at home. The need for it is obvious, and we see a big market for it. We are proud of this product; it is highly complex, highly engineered. And at first we naturally wanted, full of pride, to bring it into this exhibition. We didn't do it. Why not? When people live together — when they set themselves against this isolation and find new forms — they don't need it. They have each other. What they need is communication: they have to coordinate, to make arrangements, and they have to have the possibility that one of them accepts the parcel for the other. That is basically no problem. So our delivery box is, in a way, not here — but it is a good lesson. And now I come to the last lesson, which follows from this: as little technical complexity as possible. Life is complex enough. And in such forms of living, where people who did not know each other before come together, social complexity inevitably rises. Let us make it easy for these people; let us make the technology as simple as possible. And that is something that does not always come easily to us either, because it runs counter to a very, very strong trend from which we all hope for a great deal of business. The electrical industry is on its way to the so-called smart house: a highly automated, networked building, equipped with a great deal of intelligence, where technology takes over much of what people would otherwise do. Now, we ourselves also have products for these buildings, or interfaces to them. But the fact is that intelligence — in technology as little as in people — does not come for free.
It costs a lot: it costs in acquisition, and it costs in operation, namely energy. This is rarely mentioned in the discussion and the euphoria around the smart house, but it is simply so. And that is one of the reasons why — although we could have done it — we used no IP technology in this system, but rather conventional technology, quite deliberately, as a contribution to lowering costs. For that seemed to us to be one of the many effects of this communal living: to relieve the basic need of housing from the immense costs that currently burden it and simply prevent it from being realized. With that I come to the end and won't keep you in suspense any longer — perhaps with a kind of main thesis that could also reach into the discussion: we have learned here that people who work together can simply make superfluous much of what we consider unavoidable, indispensable, or necessary — not only, but quite specifically, in building, planning, and living. And with that I hand over and look forward to an exciting discussion. Thank you very much. Well, first of all, thanks a lot for this invitation. I'm very honored to be here, especially in the context of this extremely interesting exhibition, which was curated by Andreas Ruby. What is good for me is that the work I will present resonates a lot with the theme of the exhibition, which is communal living. When I was asked to give this short talk, which will be followed by a discussion, I was told that the theme was the relationship between public and private. My presentation will not be directly on this topic, but for sure some very important issues will be raised about this question of public versus private. So I thought I would answer this invitation by showing you research we have been working on at Dogma over the last, let's say, five, six years.
Of course it's a very condensed presentation. The title of this research is Living and Working, and it refers to something that I think all of us are experiencing: basically the lack of any boundary, spatial and temporal, between what we can call life and work. Unlike our predecessors, for us it is more and more difficult to define clear limits to the time of work. We basically work all the time, even when we are not supposed to work, because we work the moment we produce value, and value is captured in any kind of form. Of course we are not celebrating this condition; at times this condition is quite horrific, I should say. But our first question in this research was why, in spite of this condition, housing still pretends, or continues, to represent at least a difference between domestic space and all the rest of our living space. Actually, as you know, the separation of the workspace and the house is a very recent phenomenon, something that happened in the 19th century. Until then, the relationship between living and working space was much more fluid and not strictly defined. And it is interesting to see how today, in spite of the fact that work has become so pervasive and ubiquitous within our life, most of the housing that is produced still celebrates this pastoral ideal of domestic life. So when we started this research, we realized that in fact the house has always been the ground zero of labor, even more than the workspace, the office or the factory, which is a very recent invention. And this has to do with a fundamental distinction that we have to make when we talk about labor. I know this is maybe a very conceptual, philosophical distinction, but I think it is still very important.
It is the distinction between labor and work, which is actually very tricky to make in other languages; in German both are simply Arbeit. Only in English can you distinguish between what we call labor and work. As you might know, this distinction was made by a very important German philosopher, Hannah Arendt, who in The Human Condition insisted on distinguishing laboring activities from working activities. Work is what we do in exchange for a wage; it is meant to produce something. In ancient times, work had worth when it left objects behind, like a poem, a painting, a pot, or a table. The more work left tangible stuff behind, the more it was recognized as such, and therefore paid. Labor, on the other hand, is the labor of reproduction, the labor necessary to maintain our life: sleeping, cooking, cleaning the house, but most importantly raising children and taking care of the household. Arendt reminds us that in ancient Greece, where this distinction was very important, laboring activities were considered absolutely worthless, because they left nothing behind. They were just painful, meaningless toil, because everything produced was immediately consumed, like food or cleaning. And this is the reason why, in spite of their incredible philosophical intelligence, the ancient Greeks justified slavery: it was the only way they could liberate themselves from the burden of domestic labor, delegating it to slaves or to women, who, as you know, were not considered citizens and therefore had no access to political life. For the Greeks, political life stood in antithesis to the burden of reproduction.
So the invention of the house as we know it can be considered a way of organizing the labor of reproduction through the organization of space: the distinction between the different spaces of the house, for example between the kitchen and the bedroom. And this comes through what we can call domestic space. Domestic space ritualizes the work of reproduction and, by ritualizing it, naturalizes it, if I can use this term: it makes reproduction look not like work but like an everyday condition. This is a very important phenomenon, I would say one of the most perverse phenomena of our history, because in this way what is in fact work, domestic labor, one of the most painful and stressful forms of work still today, I would argue, namely maintaining a family, is not considered work, and therefore is not paid. Actually, if capital had to pay for domestic labor, it would collapse in five minutes. And in fact, historically, capital has absorbed, has captured, domestic labor for free. The architecture of domestic space has played a very important role in this process by naturalizing this condition, making it a normal condition, to the point that we no longer ask ourselves why we live in houses, which for me is a very interesting question. As you know, humankind only discovered domestic life at a certain point in its history; it is not something that was always there. So how did that happen? Here I will show you just one floor plan, which I consider a very important document in the history of domestic life. This is a project by Henry Roberts. Henry Roberts was not an architect; he was a reformer in the Victorian era in England.
And this is a very important project, which he managed to build as a housing model for families. What you see here is a single floor plan, so just two families. He managed to build this model right in front of the Crystal Palace at the Universal Exhibition of 1851, which is the apotheosis, let's say, of bourgeois ideology: just in front of the entrance stood this one-to-one realized model. What you see in this model, first of all, is that for the first time workspace is completely evicted from the plan. It would even be impossible to work in such a plan. There is a careful choreography of the different rooms, for example a distinction between the children's rooms and the parents' rooms; architecture starts to really precisely individuate each member of the family. But what is very important is that there is no communal space whatsoever shared by the different households. The communal space is reduced to a feature that is still very common today in many housing projects: the core, the entrance. And, very importantly, every house has a kitchen. So something that was often shared, a much more fluid infrastructure within the household, is not only defined by the core but also individuated for each family. Each family becomes a kind of microcosm, completely detached from the other families. And this is really the space in which the state delegates to the family the control of itself. The family becomes this kind of micro unit, the nuclear family, perfectly individuated by the house, whose job, let's say, is to reproduce itself away from the public sphere. So in a way, here you have the invention of privacy as we know it today, which is really the removal of the house from any form of public interaction.
Something that we now take for granted, but I can tell you that until the 18th century this was not at all the case, especially among the lower classes. And of course this model rehearses three fundamental categories that will be very important in the modern conception of the housing project. The first, as I said, is that the house is a place of the labor of reproduction, which should not be considered work. The invention of domesticity, intimacy and privacy are exactly the ideologies meant to reinforce this condition. Everything that we do in the home is not public and therefore should not be a matter of discussion, of political confrontation; it happens away from this possibility. The second condition, very important, is the reinforcement of the idea of family. Family is actually a very tricky concept. As you know, it comes from the Latin word familia, and the famuli were the slaves. So the family is a congregation of slaves under, let's say, one roof. That is actually how the Romans understood family values: there is a pater familias who is absolutely responsible for the property and the life of the house. So when we talk about domestic space, we talk about a vector of command. Domestic comes from dominus, which means that in the domestic space there is always someone in charge, whether it is the father, the owner, or the landlord. Family is really the political unit that organizes that command and makes the pater familias responsible for the family. And what is interesting is that today we witness a kind of friction: on the one hand the undermining of family values when it comes to new forms of production, sharing, nomadic life,
At the same time, we also have a kind of return of family values as a way to ward off a potential disintegration of society. And what is interesting is that even those movements that in the past fought against family values are today asking, in order to be part of any form of welfare, to be part of this social contract which is the family. And finally, the third element of this trinity that defines the subjectivity of domestic space is, of course, the concept of property. Now, property is a very tricky concept, because it is not a thing. Property is a legal paradox in which a very strong institution, like for example the state, can grant that someone becomes the owner, able not just to use something but to possess it. This is a very important definition of property, because owning a house does not mean we are just using the house. When we leave the house, because we go to another place, the house remains ours even if we do not use it. Property is the legal concept that preserves this condition. If someone owns something, this ownership is not related to the possibilities of use. For example, some of you could be completely thirsty and need water, but because I own the water, I can refuse to let you drink it. This is what property in the form of ownership means. And this is why, especially in the history of housing, property is a fundamental problem. Housing, and especially the idea of domestic space, is a history of reinforcing this idea of ownership, not just as a legal concept but also as a psychological concept, as a way of belonging to a place.
So if these are the premises of our research, the goal of the projects we have developed in the last five or six years is, let's say, housing models beyond both the concept of private property and the family. Well, I just want to say something about family. I am fine with my family, so I don't think this is a sort of Freudian struggle; I have a wonderful relationship with my family. But of course I always ask myself why that is the only social form of association that is acknowledged, recognized as the cornerstone of society. For me this is a very hard question, almost a taboo, that I think we need to start to challenge. The project I will show you, only one project because of course the time is very limited, is a project we did for an exhibition which took place in Berlin, I think two or three years ago, at the Haus der Kulturen der Welt. The title of the exhibition was Wohnungsfrage, which is the title of a very famous pamphlet by Friedrich Engels. Our contribution to this exhibition was a collaboration with a group of artists based in Frankfurt and Berlin. Their name is Realism Working Group, and they are called Realism because they are very interested not just in producing art but also in questioning the forms of life, the conditions of production and living, in which they happen to work. This was for us a very interesting aspect of their work. The project, very briefly, is a villa: a model, a prototype for a villa. The villa is, as you know, the quintessential domestic space, and that is actually why it was very important for us to work on this model.
And the idea is that this villa, which is a repeatable archetype, could host something like 45 or 50 inhabitants, who would decide to live and work together, not for a short period of time, which is usually the way collective living works, but for the rest of their lives. So it is a project that really rethinks drastically the idea of domestic space. The villa was a very important precedent to work with because, as I said before, we can say that the villa is really domesticity on steroids. It is really the housing model that celebrates the idea of the domestic. In fact, we can say that if the ancient Greeks invented the agora as a political space, the ancient Romans invented the villa as the rejection of political space. I am from Rome, so I feel very proud of this regression. In fact, the Romans did not like politics, and that is actually why the villa became such a popular housing model, especially for the elite: it was a way to get away from the city and to construct an almost idyllic community. Villas are collective places, meaning that they celebrate the idea of a community outside the city, but of course in a very pastoral and ideological way. Here, for example, you see a very important Roman villa, the Villa San Rocco. You see that there are two buildings: the villa proper, which was the house of the owner and his family, and then the house for the slaves, who were necessary to maintain this outpost outside the city. So in a way, the villa celebrated the idea of isolation, of living in this idyllic countryside, while all the toil and labor necessary to maintain this community was hidden away at the back of the villa.
Another example of this condition is, of course, the very famous Villa Capra by Andrea Palladio, known as La Rotonda, which celebrates domesticity with almost religious features. As you might know, the Villa Rotonda is the first house to have a dome, so it really transforms the house into a temple with four entrances. All the domestic labor was hidden, as you can see in the section, in the plinth that supports this pastoral temple. And this has always been the ideology of the villa, and later of the cottage and the single-family house: to celebrate domesticity, to celebrate the family, but to hide away the burden of domestic life and domestic labor, and to transform every citizen into an owner, locking the family into this kind of island of ownership. So our model was trying to challenge this condition. We had a precedent for our project, a very famous research project by Oswald Mathias Ungers: a series of urban villas that he proposed for the city of Berlin in the late 70s as an alternative to both the single-family house and the social housing estate, which at that time was in trouble. He was very interested in the urban villa because, being this kind of middle-scale artifact, it would allow much more cooperation among the inhabitants of the building. Unfortunately, he did not develop this model further, and this is where our research tried to develop the concept. I want to discuss three fundamental criteria which were very important for this project. I would say these criteria were for us even more important than the project itself, and I consider them fundamental if we want to change the condition of housing from this ideology of privacy toward a much more public and political space. The first is a model of collective ownership: no individual owners.
And for me the most interesting example of this model is the syndicate model, which in Germany exists in different forms. For me the most interesting one is the Mietshäuser Syndikat, a union of syndicates that supports tenants who decide to live together and share their resources. The syndicate also acts as a legal framework, allowing these people to access loans, for example, without becoming individual debtors, which is very important. But in exchange, the union prevents single tenants from reselling their property; the syndicate has a right of veto over this. And in fact, within this model we can say that the house, as was said before, passes from being an object of market exchange value to being an object of use value, because it is actually a withdrawal from the market. This is why, for example, tenants' associations under the Mietshäuser Syndikat sometimes have problems getting loans from banks, because banks know that projects like this aim in fact to exit market conditions. The second point is of course very banal, but I think very important at the same time: the reduction of construction costs, which in our model we try to achieve by using industrial construction, avoiding the finishings that often increase the cost of housing quite dramatically and are often simply ways to domesticate, if you allow me this play on words, domestic space. And the third criterion, which for us is perhaps the most important one, is the possibility to lease public land for non-profit use. This is very important because, as you know, one of the things that makes housing unaffordable is not the house itself; it is the cost of land. Today land, even public land, is used as a commodity. There is a lot of commodification of land happening at the moment, which is one of the most dramatic aspects of our contemporary cities.
For example, we were mapping all the land owned by the city of Berlin, which is interesting. There is cadastral information on this public land, but it is very dispersed; there is no public knowledge of it. In many cities there is no public knowledge of what the city owns, and often this land is sold at a very cheap price to for-profit speculation. So one of our proposals was that, because these housing projects are not for profit, the city could lease the land for free: not sell the land, but lease it for free for 70 or 100 years, which would in fact reduce the cost of housing dramatically. These are some of the sites that we selected, all very close to public transport. And we actually discovered that they really had that sort of bucolic, idyllic appeal, if you want, where a villa would sit very well. So, very briefly, the project is basically an open plan. This was very important for us: not to have fixed load-bearing walls, but to have the house as a kind of hypostyle space. There is just one element that defines the space, what we call a series of inhabitable walls, which divide, not programmatically but spatially, the more secluded spaces from the more common space at the center. This inhabitable wall was also a way for us to concentrate as much as possible all the services and furniture in one object, which sits really in between the scale of architecture and the scale of furniture. At Dogma we are very interested in this idea of returning to a kind of medieval house, where in fact there was none of this strong dichotomy between furniture and architecture, which is very strong in modern and contemporary domestic space. In the medieval house the furniture was often part of the architecture itself.
And this is really the idea behind this inhabitable wall, which contains an alcove, storage on the outside and on the back, and a generous bathroom, so that the space, both the public and the more individual space, is completely freed from any determination given by furniture. You know, furniture sometimes really gives a programmatic meaning to space: in fact we call the bedroom the bedroom and the bathroom the bathroom because, in the modern house, what gives a space its program is often the furniture itself. It is interesting to say this in a place that produces furniture. So these are some of the pictures. We realized a one-to-one model of this inhabitable wall for the exhibition, and for a few days an artist performed life inside this building. This is the section that shows the very generic quality of the architectural space and then just the living wall performing this kind of separation, which in any case can be removed, because in the end this wall is a piece of furniture. I will just quickly go through the images of the space in between, which is divided into floors. We wanted to question the distinction between production and reproduction; on the ground floor, for example, you have the sauna right next to a workshop. Together with the artists we also started to discuss what a working space for artists actually is. We realized that a lot of them complained about the lack, in their traditional households, of cooperation among people and groups, especially when it comes to taking care of children. So we decided to imagine the studio spaces as spaces that really blur the distinction between production and reproduction, spaces that can be partitioned, but whose partitions are always ephemeral, and that host not just art, let's say, but life itself.
The reproduction of life is celebrated as the most important artistic process, the one that in fact maintains our capability to work, but also as a setting in which the artists, their families and their friends can take care of each other. These are just a few more images of the villa in its context, on land that is no longer owned but used by these buildings. And that's it, thank you. Andreas, you also have been working on the housing question, so to say, for a long time. I remember Pier Vittorio speaking at your conference, Minne2Max, in Berlin a few years back. So it is a theme which constantly comes up again. And today I was really amazed to see that you have become, in a sense, the director of a virtual city at the museum; it is really nice to see how different concepts of housing come together to form a new kind of city. Maybe you can react to what Pier Vittorio just said. In our exhibition we are presenting three main themes: collective living, but also collective ownership and collective production. And this is maybe the third theme you did not mention: the collective production of housing is also something contemporary, and it is important in some of the projects you have shown. Can you maybe speak about those three aspects of the theme a bit? Just remind me of the three, because I was developing a train of thought based on the first part of your argument. Collective living, collective production and collective ownership. Okay. Let me just keep to one, that's enough for the time being.
I think basically what people are more and more about to realize is this basic distinction that Pier Vittorio was drawing upon in his talk, which of course goes back to Marx's analysis in Capital: the distinction between Gebrauchswert and Tauschwert, use value and exchange value. So we realize that the real estate market is basically oblivious to the notion of Gebrauchswert, of the use value of space. They get away with selling us shitty spaces for lots of money, and we have accepted this kind of insult for too long. It seems that more and more people are no longer willing to do this; they realize what they actually need and are starting to define their needs, what they want. And what I realized through the research we did for the exhibition is that in the past 10 or 15 years there has been a very interesting dynamic of empowerment of the actual subject of housing, the user, who so far had been left out of the equation. Either you were a client, and you had the power to define a project, or you were an architect, and you had the power to materialize those forms in dependence on the client. The user has been this kind of missing link in between, who basically had to accept whatever these two were able to come up with. And the reason we focused, in the very first space of the exhibition, on the squatting movement of the 1980s is that this is one of the historical moments where this third agency materializes itself: the users, who say stop, we are no longer willing to accept that we have no housing while some property owner, you defined it well, the one who has the power to deny me the right to use something, decides to leave a building empty in order to speculate with it and make it more valuable. This moment in the 1980s, I have to say, moves me a lot.
And actually we conceived this very first space in the exhibition as an homage to these people, because they risked a lot by investing in another concept of Hannah Arendt's, civil disobedience, ziviler Ungehorsam: protesting against the powers that be, the real estate market, property owners, the city, and the police, because they had a lot of conflicts with the police. In Basel, for instance, the place that I live in, the Warteck was for two years a site of civil warfare. Every weekend the police would confront the artists who had squatted the building, with tear gas attacks. So it is really interesting that there is a prehistory to this kind of renaissance of collective housing, which I find almost diagrammatically encapsulated in the project you were showing. And in that sense, I think it is interesting that we are able to show these kinds of projects in the exhibition only because of this process of social empowerment, in which users, who used to be objects of the production of housing, decided to turn themselves into subjects. And this is probably one of the most exhilarating and most optimistic moments in the recent history of architecture, because we can actually see not only that this leads to more social equity and equality, but also to a great deal of fantastic architecture. Through this type of user empowerment we have seen a revolution within the typological array of domestic architecture that 10 years ago I would not have thought possible. If 10 years ago somebody had shown me the Kalkbreite project in Zurich, I would have thought: this is great, but this student of architecture was obviously very interested in and inspired by some Soviet revolutionary architecture from the 1920s, and there is no way on earth this kind of stuff could get built in the real world, which means our world.
There is no way you can build a housing building with 40% of the volume dedicated not to housing. That is what I would have thought 10 years ago. And now we see: no, it is possible. So that is why I think all the things that Pier Vittorio has been showing, which many people 10 years ago would have called simply utopian, as if this guy could not get out of his mental asylum of the 1920s, are not utopian. They may be just about to materialize as an actual option for doing things. So, that is a very nice perspective. One thing which struck me is that you decided to choose artists as the people who would live and work in these kinds of spaces. And I just wonder if we have to become artists to solve the housing question. Or have we all become artists anyway? Yes, no, I think it is a very important question, which I did not want to discuss in the presentation because it would take too much time. So thanks for the question. Well, first of all, the reason we chose to work with artists is, let's say, biographical: I know many artists, not the famous artists, but many friends who are artists. And today, as you know, it is very difficult to say who is an artist. You can produce video works, or you can be an architect, a designer, a poet. For me the artist is not defined by exhibiting. We have a large workspace with many collaborators, and we live and work in this same space. That is actually why the artist is interesting to me: it is a form of life, more than a form of production, and one that is similar for people who are not, let's say, artists. That is actually why this was very important for us, because of this background, this merging of labor and work, in which many of us now participate.
Think of the traditional image of the artist: the artist is male and concentrated on work, and the wife is the person who takes everything else of life off his hands. There is a very famous series of photographs, I think you know it, by Hans Namuth, a very important American photographer, who made a famous reportage on Jackson Pollock for Life magazine, if I am not wrong. It is a set of photographs in which Jackson Pollock is at work in a very large, industrial workspace, completely un-domestic, and of course making the famous drip paintings, which are possible only if you have a big studio that is not the house. And then there is a very telling photograph of his wife, Lee Krasner, who was actually also an artist, at his side, almost as if only he had the freedom to perform this unleashing, to let the whole world explode, because domestic labor was completely delegated and separated from him. And of course we know that today, for many artists, this situation has changed dramatically. We have the so-called kitchen-table artist, who no longer requires a studio, and this has had a big impact on the way artists relate to society and also on the art they produce. For us this was a very interesting example. For us it is not only the artist but the artist's life: a very important example of a contemporary form of life that one can grasp through artistic production.
Yes, so you spoke with the people from these artists' projects. And when we talk about community and commonality, about these concepts, we also have to talk about the inclusion and exclusion through which such a group constitutes itself. Did the groups you looked at in your research reflect on whom they admit, since of course you need money? And, as a question also for Pier Vittorio with the housing syndicate: how can the threshold for joining such groups be minimized? Yes, I think that was one of the reasons we made the exhibition the way we did, with the second floor and the large city model. If you have seen the show, you know that we built this gigantic model at 1:24 scale, which assembles 21 projects on a continuous, fictitious urban space. That is very important, because in the research and the exhibition we were interested not in how individual cohousing projects look (that is interesting, but you can already study that) but in what a city of cohousing would be. There is a very strange, paradoxical effect in the way cohousing projects are perceived in the city. A project that rests on cohousing principles, which is all about sharing and people living together, sits in the normal city as an isolated condition. It carries the weird potential of becoming an involuntary gated community of collaboration within a sea of, let's say, indifference. Often against the will of their protagonists, these projects appear as super-special conditions that are not of this world, and so they become false utopias. That is of course not interesting. Cohousing becomes interesting from the moment it transcends the boundaries of the physical object into the capillaries of the urban space that surrounds it.
We said: okay, let us think it through and try to materialize what a city would look like that takes cohousing as a generic principle, meaning that every house is more than simply a stack of apartments; every house carries functions and programs addressed not only to its inhabitants but also to the people around it. Once you have that, and this is what we try to show in the model, the space that separates the houses, the in-between, becomes infused with options for encounter and exchange. From that moment, I think, the threshold sinks. What we also realized is that many projects being realized now in this kind of isolated condition, because they have no other chance, try on their own to lower the threshold, with very explicit strategies, for example funds for people who do not have the means to join the cooperative. You have to put, say, 10,000 or 15,000 francs or euros into a cooperative to join, and some people simply cannot, yet they need exactly this form of living, because it offers so much. So the members who do have the funds contribute more of their own money to open the project to those who do not, and in this way work against the mechanisms of commodification. What we also look at in our research is, for example, the work of Marcel van der Angeles, who argues that the capitalist system is very good at absorbing small changes, micro-changes, and turning them into commodities. We all know about Airbnb and things like that. I think one of the biggest questions we should be discussing is the economy of the house.
For all of us this is one of the biggest questions if we really want to move forward. But these micro-changes do have a great impact on the system. These individual projects can change the way we think about housing; as you said before, ten years ago we would never have thought of such projects. I think they have the reality and the credibility to make this translation. But I see people leaving; they have an important Skype call to Vietnam. One more question for Pier Vittorio? I have actually thought about it many times, about starting something of that kind myself, together with friends. I think your question is certainly right. A project like this is not only a project of architecture but also of design, of legislation. It is really a project in which you conceive the legal framework and the organization of the life of the people who will actually live there. But also, and perhaps this is the professor in me, because that is my job, there is an articulation that is very architectural, and I think it connects political and economic and even more anthropological questions with the way architecture organizes space in a very concrete way, with a certain idea of the organization of space, for example
the fact that a household has a kitchen, a bathroom, a living room, things we take for granted, is in many cases artificially constructed. Architecture, through the very concrete way it frames life, makes a great contribution to the housing conditions we take for granted, and it is precisely these housing conditions that then become the mechanisms through which commodification operates. I will give you an example. If you go to San Francisco, a city shaped by many non-family households, you find a very strong tradition of residential hotels. I do not know whether you are familiar with residential hotels, but they were places where you could live as a single person, even into old age: essentially a studio room, with everything else shared. They were built in the US, and especially in San Francisco, because there was a great deal of migration and social mobility. These buildings were also very important for everyone who did not live in a traditional household; many artists lived, even in old age, in residential hotels. The most famous is the Chelsea Hotel in New York. And then, all of a sudden, in the 90s, these residential hotels disappear, literally. In the span of eight years many of them are closed, for many reasons, legal reasons like hygiene and things like that; some of them burned down, apparently by accident, though people speculate that there was some intention to destroy them. And when these residential hotels disappear, something else appears, which is basically the microflat.
The microflat is very similar to the residential hotel, but it differs in two ways. First, it is very expensive, while the residential hotel was very cheap. Second, in a microflat each unit, even if it is a single room, has its own kitchen and bathroom. So you can imagine living in a small room where your bed is right next to the kitchen; it is not really the nicest way to live. It also means you have to take care of the household yourself, so all the sharing is gone. When I asked the developer why he couldn't at least make the kitchen communal, he told me: it's a good idea, but if we told the bank we were going to build a communal kitchen, they wouldn't give us the loan for this project. Because once an infrastructure is fixed in one part of the building, it ensures that the building can never revert to a conventional residential commercial typology. So for me these are really architectural questions. That is why it is important, perhaps, to do what you suggest, but at the same time we shouldn't forget that there is a disciplinary aspect of this problem which we have to continue to study and learn from. The reason I asked this question, when do you start your own cooperative, is of course not that I'm suggesting you have to do it yourself. The main question is really: when do we start to build the next world? When do we start to create the alternatives we are longing for? In the architectural community there is always this great dissatisfaction with the possibilities and opportunities that architects are given by the status quo: developers, clients, you name it. And at the same time there is this incredible capacity to provide scenarios that could be realized, yet somehow never really make it into the real world.
What was really exciting about this exhibition was to see how most of these projects actually started. It was by the initiative of people from all walks of life, with no prior relationship whatsoever to architecture or development; they were simply what Heidegger would call die Wohnenden, the people who dwell. And they dwell in such a way that they have an ambition; they say: this isn't good enough. What you think is a choice, for me isn't one, and I want more choice. What I find really amazing is how they basically managed to translate their ambition into a reality. And I wonder: can architects not learn from this kind of bottom-up naiveté, that the impossible maybe isn't impossible? That is why I have the feeling we have to go for it. One of the biggest and most important urban housing developers in Zurich right now, a man called Steff Fischer, was one of the activists of the early housing squat movement in Zurich. He started the Karthago commune, which is a very influential and important commune. I talked to him and asked: how did you acquire the knowledge to make these big projects? He is now consulting for the SBB, the big Swiss railway company, on all of their big projects that basically don't work. So when a project fails in terms of public acceptance, they call a guy who used to throw Molotov cocktails and paint bags in the 1980s. Something has changed. So what I am wondering is: there is a sense of self-empowerment here. People basically say: if I think an alternative is possible, there must be a way to make it happen. Do you think there is a way architects can learn from this type of self-entitlement, and just go for things they personally absolutely believe in but have trouble convincing other people of? A very short answer, because this is a call for action, so we should start doing something. Yes, of course. My answer: yes, of course. Okay.
That's too short. Yes, but for me it is difficult to give a short answer. The problem is that there is still a general perception that these models, including those you show in the exhibition, are somehow exceptions. A large part of society is deeply entrenched in what I would call the ideology of domesticity, and sometimes you see this ideology returning even within these models of collective living. There is a risk that these models do not radically question the premises that produce the commodification of housing in the first place. So yes, I share your optimism, I share your eagerness to expose these models and to say: this shows that new forms of life are possible. My question, perhaps, is: can we use this initial process to question more fundamental issues? For example, about housing itself: why is housing a commodity, and why is housing a scarce thing? In fact, many projects of communal living today are initiated not because people really embrace communal life as something better, but because there is no other way for them to afford a home. And this is the problem. The question we should be asking is: why does property have to be applied to housing at all? If we don't answer this question, we can invent the smartest forms of communal living and still not solve the fundamental problem of living. So: the housing question is bound up with the land question, die Wohnungsfrage ist mit der Bodenfrage verbunden. That is a good one to end on. I would like to ask you: are there any comments or questions right away? Otherwise, the buffet is waiting.
No, but you have to give them a chance to formulate a question. Yes. So, ten seconds to think of a question. Come on, pressure's on. Yes, one. Hey, maybe these are the people who need to answer. It is a remark; I would ask you to translate it into English. He understands German. The land question is without doubt a big problem, but the land question is bound up with capitalism, and that is a problem of consciousness. Did you get this? Okay, he was making a point about the question of the ground, the Bodenfrage, as Marx says: it is tied to capitalism, which is a condition of consciousness. So, can we discuss one without the other? Well, actually, it is a very important question. In fact, when they invited us to this exhibition in Berlin with the title Die Wohnungsfrage, I was, not shocked, but surprised that this was the title, because the exhibition was essentially about housing. And as you know, in this pamphlet Engels said clearly that to solve the inequality of capitalism by building houses is a kind of bourgeois utopia and just a palliative that does not change the only thing we need to change, which is the economic system itself. But one thing that both Engels and Marx completely ignored in their analysis of capitalism was the importance of domestic labor. The whole tradition of what you could call orthodox Marxism has only ever recognized the workers in the factory or the office, the workers outside the home, and has negated the importance of domestic labor and the way you can change conditions outside the home by changing conditions inside the home. And for me this is a way in which die Wohnungsfrage, Engels's question, can be rethought today: by thinking the house, the home, domestic space as the ground zero of political conditions. Thank you very much for this.
We were just about to ask when to start the revolution. What has become clear is that it is a political question we have to deal with. And with that, I would like to thank you, Pier Vittorio Aureli, for your wonderful contribution. Thank you again, Andreas Ruby, for your intervention here and for the wonderful exhibition; whoever has not yet been to the exhibition, go, it is really worth visiting. I would also like to thank Mr. Tsinder for being our host today. It is really wonderful to be able to travel not only within Germany but also to other countries for these kinds of discussions, and that wouldn't be possible without the help of Siegle. So thank you very much, Mr. Schruve and Frau Weiß, for your help, and thank you all for being part of this event. Have a good evening; I think there will be some wine and some snacks. Thanks.
Thursday, 29 June 2017, 6:30 pm, Vitra Design Museum, Schaudepot, Weil am Rhein. Collective architecture brings the public into the residential building, and understands housing as a public good. How do the private and the public connect here in new ways? Pier Vittorio Aureli, architect, professor at the Yale School of Architecture and co-founder of the urban planning office and think tank Dogma, discusses these and other questions with Andreas Ruby, curator of the exhibition Together!. The discussion is moderated by Anh-Linh Ngo, editor of ARCH+, journal for architecture and urbanism. The conversation is held in English. The exhibition Together! The New Architecture of the Collective, curated by Andreas and Ilka Ruby together with EM2N, is devoted to a theme that has occupied architects again and again: the relationship between community and privacy. Social change, and above all the structural transformation of the world of work, has repeatedly demanded new concepts that readjust the relationship between living and working, between family and community, privacy and publicness. The collective has repeatedly been the focus of investigation from different ideological directions. Charles Fourier's radical concept of the phalanstères, for instance, was a reaction to early industrialization, whose working conditions entailed the disciplining of individuals as well as of the family. Modernism devoted much energy to the spatial organization of the industrial workforce, yet the reformist ideas of collectivization were quickly shelved in favor of the bourgeois nuclear family. Today these questions return under reversed signs. As the curators write: "Today, too, society is in upheaval, because more and more people live outside classical family structures, whether as couples, single parents, singles, or older people living alone.
For many, living in community is a promising alternative that fosters social contact and lowers costs." In this context, the work of Dogma within the project Wohnungsfrage at the HKW is of particular interest: the Villa Suburbana and the Villa Urbana. What both share is that they are designed not for a nuclear family but for around 50 artists each. The designs rest on conceptual criteria: a financing model that withdraws the properties from the speculative housing market; sites that are not relevant for other developments; a construction method proven in industrial building that guarantees low costs; minimal, optimized individual rooms and maximal, flexible collective areas. About the participants: Pier Vittorio Aureli is an architect and theorist. Since founding the architecture office Dogma together with Martino Tattara, he has pursued, in the tradition of Aldo Rossi, the development of a political concept of the contemporary city. In recent years the office has engaged above all with the housing question beyond traditional forms of tenure and ownership. Andreas Ruby is an architecture critic, publisher, and exhibition maker. Since 2016 he has directed the S AM in Basel. Together with Ilka Ruby he runs the publishing house Ruby Press. Ruby has engaged with the housing question before, including the conference "Min to Max" at HAU in Berlin, in which Pier Vittorio Aureli also took part.
10.5446/54132 (DOI)
The land belongs to everybody. And of course, as we all know, ecologically, if there is a stream on my property and a stream on your property, it is the same stream, so what I do with the stream on my property affects yours, and so on. An architecture of good intentions does not solve anything if we do not change our way of looking at the problems, our way of inventing tools, our way of finding solutions. If we do not change our culture, our way of doing and of thinking, I do not think we can find any solution. To approach the problem of social injustice by reproducing housing models from fifty years ago makes no sense, without thinking about what it means to live today. Without a car I can live through life. But without land I cannot live; land, in the end, ranks with water and air. And when I address this question to society, I ask why society is silent, why this is a problem we are not aware of on a daily basis. In England, when you had those laws of freehold and leasehold, the leasehold was great. The idea of private property in land is, to me, an anomaly. Good evening. Oh, my God. My dear Anh-Linh, you have already said everything. I only heard a few hours ago that this would be in English; that is how ARCH+ works. And I think it is a very interesting question why property is again an issue. The question of property in the German version is, of course, the Bodenfrage. I gave a lecture on the topic a couple of months ago in Paris; I said "question du sol", and nobody understood what it meant. In German it is clear. And maybe a second anecdote, which is quite emblematic: last week I was on a podium in Munich for a debate, also on the same issue, and next to me sat a quite famous architect. I will not give you the name.
He said: I have never thought about this issue; for me it is normal, the plot always belongs to someone. So actually, together with Arno Brandlhuber and Thomas Mayfried, when we started to work on this book in 2013, we approached this issue of the Bodenfrage for the first time. It was actually the last chapter. In this book we tried to establish dialogues between, let's say, antagonistic fields, like city and nature, or community and individuality. One of them, in chapter seven, was Boden und Eigentum, where we considered the possibility of the ground, the land, as a common good, and brought it into dialogue with Eigentum, the property of the buildings. Then, for this magazine and also for the Luxembourg Pavilion in Venice, I started to research the theme in more depth, and of course the eminent actor and theorist, the one who developed this and pushed it into the public debate and the political sphere, was Hans-Jochen Vogel. You said it already, so my talk may be a little redundant. He said this amazing thing, exactly what I said: considering the land as a common good. So I went to Bonn, to the Friedrich-Ebert-Stiftung, where the archives are, and asked for all the documents of '72 and '73 from Hans-Jochen Vogel, and the archivist came with the papers. It was unbelievable. I had just one day, but it was absolutely fascinating to do research on this. And what do you do first? As a researcher you start with the state of the art, you showed it already, and as architects we always need help. In this case I asked Dirk Löhr, who is the specialist on this topic and who, by the way, is a professor in Trier, so it was not that far away, and together we established this timeline.
What is interesting is, of course, this man you have shown in front of a red background, Silvio Gesell, a social-liberal intellectual who developed his theory of Freigeld, free money, and also of Freiland, free land. He already said that the land should be a common good and not an object to trade with. This is of course part of a longer history, and then you see here, around '73. Vogel had started already in '72, because he had already been mayor of Munich for ten years and was of course confronted with all the problems linked to land speculation. I will let you read this longer quote; this is only the first part, from the symposium he organized in Munich in '72, absolutely interesting. Right at the beginning, in the introduction to the little booklet documenting that symposium, you read this sentence. Right, finished, okay. And this is the second part. So that was in '72, and last week I asked what has happened since. Almost nothing; Thomas Sieverts said exactly the same: nothing. So that was in '72, and we are confronted with exactly the same phenomenon today. What is absolutely interesting is that in the early 70s it was not only Hans-Jochen Vogel and his party, which had this commission within the Social Democratic Party; all the parties in Germany discussed this issue. They all wanted a sort of Bodenrechtsreform, a reform of land law, but behind the analysis they shared stood two completely different projects, as this quote from Franz Josef Strauß, for instance, shows; also interesting.
There were, of course, the Social Democrats, who had this commission and developed the project of dividing the land from the building. And this is really important: only in urban areas with very strong pressure on the housing market should the land become public, detached and divided from the building, which could of course remain private and could still be speculated with. The idea was to bring more diversity back into the cities, to redensify them, and to mix in people who would rent their dwellings. The Conservative Party, the CDU, by contrast, and it is really interesting, because you can still simply download all of their debates, wanted to bring as many citizens as possible into property, with the idea that every citizen would be the owner of his own house. The effect of that was, of course, sprawl, as you all know. There was even a so-called Bodenreform-Parteitag, a party congress on land reform, in 1973 in Hannover; in the end they worked on this for several years. There was this absolutely amazing debate; when you read the debate in the Bundestag, the German parliament, from 1973, it is so passionate, absolutely funny, extremely intelligent. I became a little nostalgic, I have to say. But in the end it failed, of course. So when we received the commission to create the Luxembourg Pavilion, together with Andrea Rumpf we first asked ourselves what we could do, because this issue is of course very important in Luxembourg. You mentioned it, Anh-Linh, and this is an important difference: of the potentially constructable land in Luxembourg, only 8% is still in public hands.
What does that mean for sustainable urban development, especially as the constitution in Luxembourg is extremely liberal? Which is very good, but at the same time one of the side effects is that this constructable land is basically not developed. You could develop it, and the question is how; maybe we will talk about this later. So this is the analysis, this is the situation. But at the same time we said to ourselves: we will show this effect, but we do not want to offer only a critique, we also want to offer a project. The main question is: how can we deal with this as architects? Of course it is extremely important to contribute to these kinds of public debates. Dirk Löhr, the land-law expert, the tax specialist and professor, said: it is amazing that you architects are now pushing this debate; we need this debate. So we can act as political activists, and my last book was about this position, about emancipation, about repoliticizing, you said it, Anh-Linh, repoliticizing the discourse on architecture, urban design, and urban development. So this is what we did. First of all, again, a state of the art. You recognize the design, I hope, which is of course inspired, let's say, by Charles Jencks, you all know his fantastic diagrams, but his history of architecture is completely depoliticized. So the idea was to revisit the history of architecture and to ask: independently of whether the land is public or private, and especially when it is private, how can we as architects, with our basic competence, which is planning and designing architecture and urban design, offer the largest possible part of the land to public use?
So we developed these five typologies; you can see them here. For us there are five, maybe you know more, and of course it is not an exhaustive history of architecture. It is a beginning, I would say a starting point, and we want all of you to contribute to it. The first typology: you reduce the footprint of the building. The famous Seagram Building in New York by Mies van der Rohe is one example, and you all know that he negotiated with his client, who by the way lost a lot of money, once he set the object back some 30 meters from the street. The second is an open block structure. We are here in Berlin, so one thinks of the superblocks, but also of the block at Vinetaplatz by Josef Paul Kleihues: the courtyard is open for public use. The third, very popular in France in the 50s and 60s, is the so-called urbanisme de dalle: an artificial layer, a kind of plateau urbanism, open for public use. The fourth is maybe the least clear, and we can discuss it, because it concerns how architects might integrate public uses into the building itself. This is one of the projects that influenced us the most; I say us, my generation, when we were students. It was, of course, the Jussieu libraries of Rem Koolhaas, and his whole idea of continuing the public sphere into the building. Here we are basically talking about public buildings.
And the fifth is of course the most radical typology. For us it is not a theory that says the whole city should be like this; it is a statement. The fifth typology is when the building, the object, is detached from the ground, and the ground, the land, is physically and symbolically free, open for public uses. Here, of course, we think of Le Corbusier and the building on pilotis. So this is a sort of catalogue, and it will be an ongoing diagram. I do not want to tell you much more, but we show projects from the history of architecture. For example, this absolutely amazing project designed by Kevin Roche, the famous American corporate architect. What did he do? His client, the Federal Reserve Bank of New York, wanted him to design 80,000 square meters, and at the same time they said: we would like to offer a public plaza to the city. How could he do it? With the legislation in New York it was just impossible. At one point he raised the object 50 meters into the air and offered the whole plot to public uses. Unfortunately it has not been realized; a tower next to it stood empty, and they moved there instead. So this will be the pavilion: we will show projects from the history of architecture, and then experimentations from the very young university. I do not want to say more; come and see us, I would be very glad. Thank you very much. Good evening, everyone. My name is Maria, as Anh-Linh said. Following up on my contribution to the property issue, I will very briefly introduce one historical case I have been researching as part of my doctoral thesis: the case of social ownership, a form of property relations that came to characterize the space of socialist Yugoslavia.
In recent years, the architecture of socialist Yugoslavia has received a lot of scholarly attention. When we say Yugoslavia, images come to mind: Brutalist housing blocks, forgotten futurist monuments in the middle of nowhere, modernist ruins. At the same time, the political and economic project behind Yugoslav architecture, including its property relations and modes of financialization, remains largely invisible. But how do we tell these hidden stories, without the architectural object as our only disciplinary compass? Socialist Yugoslavia, as we know it, refers to the period that started in 1948 with the famous Tito-Stalin split, when the country was expelled from the Eastern Bloc. This uneasy position of not being part of the West, but now not being part of the East any longer, in the middle of the Cold War, created a geopolitical anxiety that urged the Communist Party leadership to reinvent its ideology. During the 50s, a specific version of socialism based on the principles of workers' self-management and social ownership was introduced. The rupture did not happen only on the institutional level. Above all, the change was biopolitical. It was framed around the construction of a new political subject, namely the Yugoslav self-manager. Very broadly, workers' self-management, positioned somewhere between the Soviet model of central planning and the capitalist model of the free market, was a form of horizontal political system governed by a network of workers' councils. In practice, self-management, or samoupravljanje in Serbo-Croatian, meant that the workers make their own economic plans, distribute apartments and nominate their own leadership. The multiplication of workers' councils was imagined as a hack that would abolish the state gradually. At the center of self-management, next to the workers' councils as political units, were the institutions of social ownership and the socialist enterprise.
The socialist enterprise, to put it simply, was a type of company in which workers were permitted to take or reinvest the surplus normally given to the owners. Workers in the socialist enterprise could, for example, invest their profit into bigger salaries, new housing construction or vacation plans. This created a state of so-called non-ownership, where those who produce at the moment both govern the company and harvest its profits, but where nobody has the right to sell the company itself. On the level of housing, the same. Socialist enterprises slowly emerged as a new kind of investor, significantly influencing the development and construction of newly built infrastructure. Housing represents perhaps one of the clearest cross-sections for understanding social ownership in practice. Together, enterprises and the communes, units of territorial organization that managed the land, formed housing funds that financed the construction of the socially owned housing. Occupancy rights emerged as a specific type of rent for these apartments, radically diminishing the tendency towards home ownership. Suddenly, more people were renting instead of buying housing. And while socially owned housing wasn't the only type of real estate in Yugoslavia, it co-existed with private property, it was legally prioritized because the system would push for it. Also, housing was not the only form of infrastructure that was socially owned. Among other infrastructural projects, numerous tourist facilities on the Adriatic coast were often financed and constructed in a similar way, and were available to the workers of the companies that invested in them. Socially owned apartments were built and maintained by socially owned construction companies. These companies, which strongly incorporated elements of the market economy, offered their services more often through competition than through direct contract, thus allowing for monopolies to emerge.
During the late 60s and early 70s, the markets were already dominated by the big players, while many of the small companies went bankrupt. This led to new laws aiming at further decentralization that, on the contrary, made the whole system less flexible and more bureaucratic. When the housing crisis hit in the 80s, construction companies were already unable to provide housing cheaply and fast enough, opening a grey zone in which informal settlements started to proliferate. The vague definition of social ownership created a legal loophole for manipulations and the massive privatization that followed. Looking from the perspective of 2018, could it be said that social ownership in Yugoslavia failed? What happened to the promise of a system that aimed at complete horizontality? The critiques are various: too bureaucratic, too top-down, too much theory and too little practice, too capitalist. Social ownership as a political idea offered a new framework for more equal planning of the built environment. However, the gap between architects and politicians remained. Bureaucracy was replaced by technocracy, a layer of experts who dominated the planning process without recognizing the specificity of their own historical moment and all the roles they as architects could play in it. Now the question moves on to us. What can we learn from the case of socialist Yugoslavia? Economists, political theorists and legal scholars wrote about it. We read about it. But what comes after? Who has the expertise to talk about property? Or rather, could the discussion on property challenge the whole paradigm of the architect-expert and instead push us into a zone of fluidities? Of being an architect and a politician, architect and lawyer, architect and economist, architect and feminist? Is it about looking for past models to be copied today, or instead about making a turn from models and objects towards infrastructural or systemic ways of thinking?
Infrastructural, not only on the level of the built, but also on the level of understanding how invisible mechanisms such as property, labor, forms of governing and distribution, operating behind what we in the end see as architecture, are the very architectural matter itself. Thank you so much. Architecting means designing and guiding a plan or an undertaking. So it's not only about buildings. It's much more than that. My point is that we architects could architect basically everything. The most important things that you can architect could be policies. And as you know, governments work with policies. So basically, by architecting policies, you architect governance. I'll start by asking one or a few questions, because there were two points I thought were very interesting, and I'd really like to talk with you about them. Florian said that '72 was the last time something important, something politically relevant, happened about property. The second thing is: why is Patrik Schumacher so important, and why does everybody talk about him? Because I think the two things have something to do with each other. He is an example of someone who is not withdrawing and who tries to tell these kinds of new stories. And I think Patrik Schumacher has better stories than the left, because I was sitting with him on a panel, and in the end everybody listens to him. And the left always says, yes, but we have to think more complexly. So is there a lack of saying things simply, in architecture and in politics? Yeah, it's working. I don't know. It seems so from what you're saying. But I think the question is somewhere else a little bit. I think we have to step out a little bit of the discussion about architecture, only architecture, or architecture the way we understood it until now. And then we talk about politics, and I would say also about a new economic model and a new way of organizing society.
So it's, I know it sounds very ambitious, but for me, the way I look at architecture and the role of the architect is actually this: the architect is someone who helps to organize society. That is our role, because we organize space. And why are architects, or why could architects be, better politicians? Yeah, but they don't try. I don't think they could become whatever they want, but they don't try. But we are all not very happy with politicians at the moment, and you are about to found a party. Yes. And you're an architect. Yes. But let me tell you something else. I don't think it's a problem of architects. I think professionals in general stay away from politics. Actually, let's say, the people we admire, we like, stay away from politics, because politics is dirty and they don't want to lower themselves. And I can give you a very good example, and that's in Romania of all places. In 2016, Romania had a technocratic government. So what does that mean? That means that people who had never been in politics, professionals, joined forces and functioned as a government for one year. It was a one-year break in Romania, and that worked really well. And that made people see that a country can be run in a different way. And politics, of course, we were not there officially, as I said, we were all professionals, but we were doing politics, of course. And they've seen that politics can be done in a different way. And what way? Sorry? What way? Well, first of all, we were all there with a very clear purpose, and that was the common interest, the public interest. This is what we were doing. So we were serving the country and serving the people, which in Romania is quite special, because politicians behave as if they deserve the position in which they are. So they treat the others, let's say, as a lower caste. So it's actually very ugly how things go. And we were competent, many of us coming from the private sector, just fighting the system, basically.
Because of course, I'm only talking about ministers and secretaries of state, these kinds of people, because the rest of the administration was just the old one. So we had to fight the system within the system. Yeah, because I think it's also, that's why we founded a television at the ETH with Arno, it's a problem of communication, you know? It's a problem of language. It's a problem of compressing things, or it's a problem of whether people really listen. Because things are very complex, and how do you get people interested in what you are talking about? And that's also the Patrik Schumacher thing. He's so harsh in his statements, and he's so entertaining, in a way, because you just sit there with an open mouth. So how do we... I mean, it's all there, you know? The data, the history, we are on the right side. But how do we transmit it? That's, I think, the main thing. Well, I see... No, sorry. Did you ask me? Yeah, yes, yes. I see at least two ways. First of all, for instance, in our case in 2016, we were quite bad, I think, at communicating what we were doing. But as we were doing things, we were really genuine, we made news all the time. I'll just give you one example. One piece of news: the Prime Minister crossed the street on foot. He had a delegation, and instead of taking the limousines and all these things, he just went on foot. Second piece of news: the Prime Minister of Romania travelled by plane in economy class. Wow. Just doing things. But the other way is actually using people such as architects, I would say, to communicate. What do I mean by that? We can imagine the future, no? We are good at that. This is what we do. So let's imagine it. We can use, to begin with, images. An image speaks as much as a thousand words. And hopefully we learn how to talk as well as Patrik Schumacher, but then maybe about other topics. And, well, that should work.
So I think your project is for me a very, very, very interesting one and very useful, something that I want to look deep into and learn from. Oana, I just would like to say one word. I really admire your project. I mean, the ambition is great, and I wish you good luck with this project. I don't, no, no, there's no but. I don't think, probably, that more architects should become politicians. Or at least, I don't think that the majority of architects should become politicians, because then eventually politicians would probably become architects, and then this would be a mess. What I really believe is that there should be more, I would say, brilliant architects there where the decisions on the production of space are taken. And I think there should be many more brilliant architects, the best architects should go into the administration, for example, where the policy of the production of space is actually dealt with. I think this is rather important. But is it really a question about the best? I think if we want to change things, then we cannot go back to the technocratic stance; we have to try to find other ways of democratizing the planning process, isn't it? So is it going into politics, or founding a new party? How would you say it is different from what Florian has said, that architects have the tools to work with the property issue? But is it enough? And how do we change the democratic system from within? So that's something I would like to add: having a brilliant architect, maybe Schumacher is a brilliant architect, I don't know, but that doesn't solve the problem, does it? My point is not that we should all become politicians, not at all. My point is that we should, if you want, break our profession free, get a little bit out of it, actually even not getting out of it, because I've been saying this for some years now: architecture is not only about buildings.
There are two definitions of the word, and the second one is about designing and guiding, as I was saying in the trailer, a plan or an undertaking. So we have skills that we could put to use, and I'm wondering, why do we limit ourselves? And of course we should have better architects in the administration, we should have some politicians. But my point is precisely that we could, with our way of thinking and designing and imagining, change politics, because that's at least my aim, I'm not planning to be like the others. And I see it really as a project, but instead of looking at a plot of land, we look at the nation, we look at the country, and you think all long term. But maybe it's also about what you said, it's important to tell the story. It's not only about the best architects going into politics, it's also the way we talk about it. So we are sitting here, we know the Bodenfrage, we know exactly what we are talking about, but I took a taxi here, and I tried to explain to the cab driver what we are going to discuss today, and he simply didn't understand, he didn't know what the Bodenfrage should be, because he said it's quite clear, land belongs to a private person. So I think it's also that we have to look back and find good models or concepts and update them. It's like how you looked into this concept of social ownership, and maybe how we could update it for today. Yeah, I would also say that I really believe there is no one ideal, let's say, ground of struggle, but different kinds of knowledge require different kinds of struggle, in a way. So in that sense, let's say, the thing I really concretely struggled with quite a lot: I have this historical case, I research it, and now the question becomes, which kind of knowledge does it create? What do I do with it? And then there is a moment of being stuck: am I Yugo-nostalgic, or is nostalgia also something that can be productive?
Like, is it something where I sit today and look into the past and cry over times that I wasn't even born in? Or do I see this kind of knowledge as a legacy that can be used somehow today, when the situation is completely different? And then you have, I don't know, socialist Yugoslavia fell apart, and now you have this moment of the state going directly into partnership with Abu Dhabi companies, selling the very valuable land in Belgrade. And in this way, I would say, historical knowledge doesn't become something like a bubble that stays there in the 50s and 60s, but becomes actually a properly active tool to unlock the present moment, and actually the future moment. So in that sense, I think it's... But the thing is, there was one word which killed Vogel's approach, and it was Enteignung. They said Enteignung, and the law was dead. No, he said that in the interview. But what's Enteignung in English? Expropriation. Expropriation. This is how the word functions, you know? I mean, there is a... And that's why Vogel in the end kind of, because he couldn't tell his story. He was very good at lawmaking. He was very good at writing long texts and things like that. And that's what I'm interested in, also in Oana's approach: there is this thing that she says, architects have the better stories for the future. Architects have the better images, and, you know, something you can maybe get emotional about, and it's beyond the data and the history, and we know all these things. Here we know them. So how do we... Now, actually, the idea of Nutzungseigentum and Verfügungseigentum, of course, this project was not feasible. And of course the expropriation is part of it. But there were two other aspects in this land reform, which were... Wow, how do you say it in English? Bodenwertzuwachssteuer and Planungswertabgabe. But that doesn't sound very sexy.
I'll try to describe it, because it's really important. Bodenwertzuwachssteuer is when a field, and that happened everywhere, in Bavaria, for example, very prominently, when an agricultural field is transformed into buildable or constructable land, and all of a sudden the value rises incredibly. So they wanted to tax this, and even the CSU discussed about 80% of this Bodenwertzuwachssteuer. That was the first one, and the second one was the Planungswertabgabe, which means basically... I mean, imagine that there's this rule that when you build one square meter of housing surface, the public hand has to produce one square meter of infrastructure. So they wanted this increase in value to contribute, of course, to the construction or the financing of infrastructure. And those were the two basic aspects. And the Liberal Party at the beginning of the 70s, the partner within the government with the Social Democrats, was not the Liberal Party that we are facing today; after the Freiburger Thesen, they were extremely... They said even in the Bundestag during the debate that land should be very strongly subject to the Sozialbindung. So it should be really what is in the German constitution, and this is absent, I think, in the Luxembourgish constitution, the Sozialpflichtigkeit of property, that property has a social responsibility. And the Liberal Party wanted to implement those two aspects, but of course Vogel, for whom, by the way, this fantastic debate in the parliament was not his best day, actually, wanted the project to go further. And that was not feasible with the Liberals, and that's it.
But what I say, what the whole critique coming from the Conservative Party was, and I think this is still an aspect that we should discuss today: today we say, okay, for example in Basel, they are not selling public land anymore, okay? So what happened here in Berlin? I mean, if we believe Michael Mönninger, who always states that after the fall of the Wall, 50% of the constructable land belonged to the municipality. And they sold it off, of course, especially after the banking crisis, which was before the global banking crisis, so it was in 2002, they sold the land. I think they have stopped with this now. So the question is, if in those areas of high pressure the land belongs to the municipality, you give a lot of power to the municipality. And that's why I think it's really important that the people who manage this power are, well, let's say, I don't know, brilliant or whatever. That is also an issue. But is it really just about storytelling? I mean, if we look, for example, at the '68 generation, they invented new ways of communication, of protest, of living, which appealed to a certain class and majority. So what's our own approach today in order to get these messages along? I think it's not only about storytelling, but the stories will win in the end. Because, I mean, I didn't understand what Florian was really talking about at the moment, because I'm not an architect and I'm not a specialist in these things. And I only vaguely remember what the difference was between the Liberals then and the Liberals today. But the thing is that I'm totally for complexity, and you're talking about complexity, and we all have to understand it, and we have to understand the history, but then we have to think of a way to engage other people.
And this is why I like Oana's approach so much, because she goes out of the bubble, she goes out of this discussion and tries to get more people involved, and not only in Romania, as I understand, but in Europe. But this you can only do if you compress it to a simple argument. And if I hear these words from Hans-Jochen Vogel, I get afraid. Baug... is that something? It's like when the German police chief had this brilliant idea of an internet of the police, and he called it Rasterfahndung, and everybody thought, oh my God, what's that? It's how you sell it to the people, isn't it? We are working a lot with language, with words, and with several philosophers. I really think philosophers should get involved with politics a lot. And what we are doing, we just start from the words that we are all using, our common vocabulary, and we look for synonyms, and then we look for the variant of the word that is the most popular one. And we are using that, and it's quite strange, at least for me. For instance, we looked at the Romanian translation of competent. We have that word in Romanian, competent, but people in the countryside don't use it. So we found another one, and we are using that one, and for me it's strange to use that word, but I'm using it. So this is just a very small example, but I think actually nothing will change. We can have the best architects in the best positions and the best professionals, if we don't, all of us, change a little bit. I think it's time for a bit of a shift in our mindset, and this whole discussion of property, for instance, I would link with something else, and that is the fact that there are too many people on this planet, and our ecological footprint is too big, and it all has to do with the fact that we believe that the planet is ours and that we as a species are superior.
And I think the moment that we give up on that, and I'm very fascinated by all kinds of stories about animals and birds and insects, which are such intelligent creatures, you see. And when you realize that maybe the planet is not ours, maybe we are just one of the species, and when you draw that pyramid, you know, the very famous one with the human being on top, and, no, you change that, and it's just a group of creatures and the human being is one of them, then your relationship with land will be totally different. And there are examples in history: there are aboriginal peoples, for instance, who have a totally different relationship with land, they respect it very much and they don't damage it. But the interesting thing is, we have a piece by Naomi Klein in the issue talking exactly about this problem: that when the Native Americans were fighting for their rights on property, nobody listened. But when they started to build up connections to ecologists and the ecological movement, then they had a totally new set of arguments to bring in diverse stakeholders, and then they were able to fight in a larger community. So that's what I see in your project: to see politics as a project again, not as an execution of legislation, but as a project which can be designed. Exactly. But actually this is how I ended up talking about getting a little bit outside of our bubble, because with our office we have been researching for quite some time, I would say like 10 years, we exist for 10 years, this idea of land use. To the point that we realized that for ecosystems to survive, you need a certain amount, a critical mass, of land that is not used by people. And these days, I think there was an article, some specialists even claim that half of the planet should actually be free of humans, human-free space; that means of course you might cross it, but you do it really carefully and you stay away from it.
So that means no infrastructure, no agriculture, no tourism, you just leave it free, and that's the only way that the planet can survive. So we are very much busy with that, but we realized, okay, if this is what we care about, if we really care about the way land is used and about how you create density and quality in density, because they come together: if you want to keep land free, you have to densify. You have to densify the cores of cities, of villages, and you have to do it well, because you cannot just extrude, you cannot just close everything up; you have to respect privacy, you have to find almost all the qualities that people find today in their free-standing villa in the suburbs. And we realized that we as architects always come in too late, when the decisions are made, when the decision to build somewhere, a wrong decision, has been taken. And who took it? It's always very high up there. So we said, okay, let's try to do something, and my associate is the Flemish government architect. He's working for the government of Flanders in Belgium, but he's independent. He's not political, but he has quite something to say, he has a word to say in this whole debate, and his agenda is really this, because Flanders has a very big problem of urban sprawl, with huge consequences and huge costs. And that's another thing. It's only the moment when people in Flanders learned how much money from the public budget is lost because of urban sprawl, when they realized that we are talking about five billion euros every year that is lost because of traffic jams and because of loss of heat through the skin of individual houses, when they learned how much money they spend on that, that the level of awareness was all of a sudden higher. So money is, I'm sorry to say that, but it's also a way to make people sensitive, to raise awareness. You just talk about money, it hurts them.
Yeah, what I like about your approach is that you don't withdraw, but you go into the process and accelerate it. And on the other hand, saying we want to be in government is not actionism, is not local politics and not demonstrating, but saying we want to do something and we take the responsibility for it. It's not romantic or folkloristic. Like you said, the '68 movement, they were very political, and at the time they were okay, but repeating that is very romantic, because all this actionism didn't lead to anything. I mean, look at... We can ask that later. But just one question. What's the... Yeah? Nicolas. I don't speak English very well, but the problem is we are debating something, and what's very important and interesting is the time between 1919 and 1924. You don't mention it. It was a debate, and it was a debate between architects, a debate on socialism and the socialization of the economy. And if we remember the houses of Bruno Taut, the settlements, then this was a background for him or for the other architects. So in this time, 1919, they built settlements by the state, on state land, and with state-owned firms or with state firms, I don't know. And the curious thing is that my father built one of the first settlements in Potsdam-Babelsberg for the city, very economically, and so on, and in other cities you find the same projects. So at the end, in '24, society and the economy were reconstructed in the capitalistic way. And this was the starting point of the Neues Frankfurt. What was going on between 1919 and '24? It was a bright period. You must mention it in the timeline. It was a very hard debate, and it was wonderful, over the socialization of society, and the projects; that was not only a theoretical debate, it was a practical one. This was my father's job. So the question is how to come from the political perspective into the practice, maybe, just to pick it up.
No, of architecture, as he just mentioned, just to continue. I mean, what's the programme of your party anyway, which is going to be published tomorrow? No, the programme is not going to be published tomorrow. We are going to launch the party with a manifesto. The making of the political programme is also something special, a project in itself. So it's not yet... We want to debate about it, we want to discuss it, and we want to be very transparent about it. So the upcoming months will be very interesting. I want to mention one very important thing. It all sounds very nice, but there will be elections, and that will be the test, whether the story, the message, gets to people. So it might be... I'm smiling now, I'm very optimistic, I really believe in it, but it might be that nobody votes for us, so we'll see if we have success or not. But we give it a try. Shall we open it up? Yeah, yeah. Yes. Questions? Maybe one more to Nicolas. Nicolas, you're absolutely right, we say it in the magazine. There were, I think, basically three waves. The first one was the one you described. And also I would like to say a word on... it was actually prepared a little bit before, but you should all read... I discovered this book lately, and it's absolutely fantastic, one of the best books I've ever read: Rudolf Eberstadt's Handbuch des Wohnungswesens und der Wohnungsfrage, where he tells the whole history of urban development via the land question, the land property question. It's really interesting. So this debate was absolutely important, and then it was implemented by the cooperatives and so on. Right, then the second one was after World War II. We haven't mentioned Hans Bernoulli today, who was a very important Swiss figure, who was also a professor in Zurich, and he was also a politician in the Swiss parliament, and, by the way, a practitioner here in Berlin, and he developed his position upon Silvio Gesell's theory of Freiland.
He wrote this fantastic book, Die Stadt und ihr Boden. So this was also a debate after World War II, and today, I think, we are hopefully at the beginning of the state taking its responsibility again. We were talking about droit à la ville, the right to the city, droit au logement, the right to housing. So it has to be thought together with the land property issue, I think. But it's an open debate. I mean, the spectrum is quite large, you know, from this kind of neoliberal attitude, Patrik Schumacher, towards what happened in the Soviet Union, where everything was communalized. The spectrum is very large, and it should be an open debate. But I think it's also, like in Oana's case, it was the protest of the civil society, you know, that brought you into politics, actually, and that made this case of land grabbing so present all over Europe. So I think it's also the question of the engagement of civil society, you know, and it's not only the discussion amongst us, and actually no one is really... now it's starting. Of course, the Wohnungsfrage is always a topic in Berlin, but really no one talks about it. We are doing this issue now, of course, and Anna was also in a discussion with Katrin Lompscher, and she said it's very difficult to change something; Erbbaurecht is a nice concept, but how can we implement it? And it's something that we have to... it's a demand, no? We have to... I think that's why we will continue with the theme via the practice of the commons, because it's a way of practical implementation of politics by doing. So that's, I think, the only way to change things, without a politician saying it's not possible to change. You have to do it on your own, by coping with the problems in nuclei, in groups, and trying to find a new way of doing politics, a new way of dealing with property and so on. So, I think, unless there are urgent issues left, no? Questions?
I've already asked if there are any comments, questions, no? Yes. Perhaps just one comment. I was astonished that you tried to construct a "we" among architects, for... Perhaps you mentioned that there is a wide range, and there are architects from neoliberal up to socialist. And so I think it's less a question of architects, but a question of social movements, as here in Berlin has started again. And yes, finally, I think often architects are middle class, and the question of classes might be talked about in such discussions as well. Yes, just as an addition. Thank you. Okay. No? Good. So then I would like to end the evening by thanking you: Oana Bogdan, Christopher Roth, Olaf Grawert, Marija Marić and Florian Hertweck. Good luck for your projects. I think the most important thing we can take away today is that we should think about politics as a project again, which is something we have to engage in, we have to work with, and not just leave to someone else. So... And it's not us against them, but working with these problems and issues on different levels of society. Thank you very much. Thank you also to Arno Brandlhuber, Olaf and Alina Mora, who has been working with us on the issue. Contributing editors Florian Hertweck, Andrea Rumpf, but also Stefan Krueber from Pittsburgh, Robert Thum from Trier and Harald Trapp from London. I really would like to thank our partner, Marie and Marie, especially Charlotte O'Casse and Marie-Louise Greb, who not only make it possible, but also make it beautiful. So thank you for the work. Thank you also to Hyman Schwanthus for the visual identity of the Atlas of Commoning, our partners ETH Zürich, IFA and of course the Embassy of Luxembourg for their contribution, Ziedle and Don Pracht for their continuous support of our work, Veronica, Google and the HKW team for hosting us, and our media partner, ArchitekturClips. And of course, without the entire ARCH+ team this would not have been possible.
So thank you very much, Nora Dünser, Christine Rüb, Mirko Gatti, Max Kaldenhof, it's quite a long list, but I really have to mention them all. They worked tirelessly during the last month: Alexander Stumm, Alexandra Nehmer, Lauren Seidel, who did a really good job in the end, helping me to push it through, Niels Frühling, Angelika Hinterbrandtner, Yuki Akpina. And of course Sophie Lovell and Fiona Shipwright, who are now working on the English version. And I would like to invite you to the event in Venice on Friday, May 25th, so save the date and be with us in Venice for our next round with other guests. And a dance performance, so another way of dealing with the land. So thank you.
THE PROPERTY ISSUE ARCH+ features 73 with Oana Bogdan, Olaf Grawert, Florian Hertweck, Marija Marić and Christopher Roth. Haus der Kulturen der Welt, Berlin, March 22, 2018 ---- Access to land governs the production of space and the social order. On the occasion of the publication of issue 231, "The Property Issue" (produced together with station.plus of ETH Zurich), ARCH+ discusses with Oana Bogdan, Olaf Grawert, Florian Hertweck, Marija Marić and Christopher Roth the social and political perspectives for a different architectural practice. With "The Architecture of the Common Ground," FLORIAN HERTWECK presents an architectural history of the land question and gives a first glimpse of his work for the Luxembourg Pavilion at this year's Architecture Biennale in Venice, which he curates together with Andrea Rumpf. MARIJA MARIĆ explains the often-cited "Third Way" of former Yugoslavia and illustrates, in conversation with OLAF GRAWERT, how social forms of ownership beyond private and state organization can work. Using the particularly flagrant situation in the Romanian municipality of Roșia Montană, OANA BOGDAN shows the methods by which international corporations act in alliance with government bodies in land grabbing, and develops, in conversation with CHRISTOPHER ROTH, a perspective for political action in architecture. --- OANA BOGDAN is an architect, urban planner and co-founder of the architecture office Bogdan & Van Broeck in Brussels. She was State Secretary at the Romanian Ministry of Culture (2016–17), where she was primarily responsible for questions of cultural heritage, and a member of the Royal Committee for Monuments and Landscapes of the Flemish Region (2013–15). She is currently a member of the Advisory Committee for Architecture and Design of Flanders (since 2015) and of the UNESCO expert committee of the city of Bruges.
OLAF GRAWERT studied architecture and urban planning in Innsbruck and Berlin, with a focus on architectural theory and urban research. He has been, among other things, a guest lecturer and speaker at the Joint Master of Architecture Suisse, the Accademia di Architettura Mendrisio and the Chicago Architecture Biennial. He is co-editor of the platform WIA (whatisarchitecture.cc) and has been working with Brandlhuber+ since 2015. Since 2017 he has been a research associate at the Department of Architecture of ETH Zurich. FLORIAN HERTWECK is an architect, professor of architecture and director of the master's program Architecture, European Urbanisation, Globalisation at the University of Luxembourg. In 2018 he curates, together with Andrea Rumpf, director of the Luxembourg Center for Architecture, the Luxembourg Pavilion at the Architecture Biennale in Venice, which likewise addresses the land question. MARIJA MARIĆ is an architect and researcher in Zurich. She is currently pursuing her doctorate at the Institute for the History and Theory of Architecture (gta) at ETH Zurich, where she also teaches. Her research concerns property relations and architecture in the context of socialist and post-socialist Yugoslavia. CHRISTOPHER ROTH is an artist and film director. His film "Baader" was awarded the Alfred Bauer Prize in the competition of the 2002 Berlinale. In recent years he has made "Hyperstition" (2015) with Armen Avanessian, as well as "Legislating Architecture" (2016) and "The Property Drama" (2017) with Arno Brandlhuber. He is currently working on three thematic television channels, together with Studio Brandlhuber at ETH Zurich (station.plus), the Kunst-Werke (realty-v) and the Haubrok Foundation (42).
10.5446/55281 (DOI)
So, just in the sense of introductions, we're both from Inge Conscious Studios. My name's Adam Jezowar. I'm the creative director and founder of ICS. I've been building websites for over 12 years. I started this agency as a solo designer-developer, so I've kind of been on both sides of the aisle. We've now grown into a nine-person team. And before starting ICS, I was an in-house designer at various startups and agencies throughout Boston. So, Kevin, do you want to give a quick intro? Yeah, sure. I feel like half the room knows me. I'm Kevin Brooks. I've been a developer for, yeah, about 13 years now. I feel like I tried to be the unicorn. Some people have heard the term unicorn. We'll talk about it here. But I've served as designer, as front-end, as back-end, all across the board of development at various points in my career. But I remember starting out with a four-person development team, working in higher education very early in the web, before we had visual designers to design websites. So I remember what the days of not having designers were like, and how we mucked things up. And now I've come to love them. So that's what we do. In our play today, I will be playing the designer, and Kevin will be playing the developer. So at ICS, we believe very much in open communication. We're a small team, so it perhaps makes it easier for us. But we really try to keep everything as transparent and open across departments as possible. We also really believe in collaboration. We believe that, especially across departments, collaboration is how we all get to do our best work. So can't we all get along? So why did we put this talk together? For too long, we feel, there's been friction between design and development. We feel this is based on fundamental misunderstandings between the two disciplines, how they work and what we're trying to achieve.
So there are misconceptions. Designers tend to be perceived as artists, or worse yet "creatives," who are only interested in expressing themselves and creating pixel-perfect designs, and who have little to no interest in how things actually get built. And on the development side, developers have more of a reputation of maybe being challenging to work with: they don't really care about how things look or feel, maybe they just want to find the quickest solution, maybe they're kind of stubborn. So the truth seems to be somewhere in the middle. The truth is that we're both after the same thing. So what we'd like to do is change the dynamic and build relationships based on empathy, understanding and trust. For a while there's been an argument that designers need to learn how to code, and this is becoming an increasingly challenging task. Roles are fragmenting. There's no longer just designer or developer. There's front-end dev, there's full stack, there's UI designer, UX developer. It's becoming increasingly challenging for a designer to really master those skills. And Cameron Moll, who's CEO of Authentic Jobs, is quoted as saying that the argument that designers must code and excel at both disciplines is becoming an increasingly daunting one. So what we feel is that while designers will benefit from having a basic understanding of code, particularly HTML and CSS, and developers should learn to develop an eye for design, we don't have to master each other's disciplines. We simply need to learn to understand and respect what each other brings to the table, and how this shared collaboration elevates our work beyond what any of us could do on our own. And so unicorns need not apply. As Kevin was saying, the unicorn is that one person who's magically a full-stack dev, a UI/UX designer, kind of crushes it on all levels. So maybe you know some, maybe some of you are that person.
If so, that's great. But it's a rare occurrence. I have yet to ever meet one in the wild. And it's not really something that you can build your team around. So again, it's not about one person mastering all skills. It's about bringing those skills together to collaborate. So design loves development and vice versa. Design and development are complementary disciplines that support and enhance each other's efforts. And we're all aligned towards the same goals. We want to solve our clients' problems. We want to improve our users' experience. And we want to create our best work. And we can't do it alone. Okay? We both share a love for creative problem solving. Design excels at creating elegant visual solutions to solve our client or user challenges. Development excels at engineering efficient solutions to solve those problems. It's really two halves of the same coin. So if you think about these two, we all know who we're looking at, right? Both tremendously talented. But what would you rather listen to, a Wings album or a Beatles album? Paul needed John's sardonic wit to tame his sort of saccharine tendencies. John needed Paul's whimsy and melodic skills to elevate his work. Together, they're one of the best songwriting teams in the history of modern music. Sure, they created some decent stuff on their own, but really, when they were together, that's when the magic happened. So communication is key. How do we start to understand and empathize and collaborate better? It really starts with communication, and open lines of communication throughout all stages of a project. So first off, development must be at the table throughout discovery, strategy, and planning sessions, particularly when clients are present. Too often project managers, product leaders, maybe creative directors go to those meetings and strategize. Then a whole bunch of decisions are made, and development just sort of gets handed those decisions.
So we want to change that. We strive at ICS to not do that. We really want development at the table throughout, helping to make those decisions. And we want to get developers out of the basement. So that means if you're not invited to those meetings, talk to your project lead and say, I need to be at that meeting. Have at least a representative from your team go to that meeting, because it's not only for that technical perspective, sure, but also because design is beyond just how the pixels look on the screen. You're actually designing solutions. You need to look at it from a holistic angle. So bringing development to the table ultimately just leads to better design solutions. Developers need to be reviewing the design and UX artifacts that designers are creating. Those could be wireframes, site maps, down to design comps. We encourage frequent internal reviews. We use tools like Hangouts, Invision, and Slack. Those are great tools, especially when you have remote teams, and a lot of us maybe have remote teams. With that sort of frequent internal review, we'll sometimes review something two, three times before we even bring a first round to the client. And that's not just among the design team; project management's in on that and development's in on that. So we're all kind of looking at things together and ensuring that it's the right solution for our client's problem. So when developers are communicating with designers, you want to educate them. Educate us. Don't assume that we know. Don't just say, can't be done. And I've been in that situation. Don't just say that. Explain why. The designers will then bring that knowledge with them to their next solution. The next solution will be stronger. And it'll just sort of kind of move you along that path. So I'm going to let Kevin jump in on this one. Yeah. So I kind of want to go through a real-world example.
We do try to have everybody at the table as much as possible. We want development and design all communicating, all being part of that same conversation, because we are after the best product for our clients in the end, not even just necessarily the easiest thing for me as a developer to build. So this is a basic wireframe. It looks pretty normal. Nothing too crazy going on there. We've got a logo and some navigation, some text overlaid over an image. Not too complicated to build. Not too complicated to design either. Excuse me. So this is what it looked like once it was designed. Which, again, the designer did a great job. It fits within the specs of the wireframe. There's nothing really moved. There's nothing really crazy. The development team is going to go, hey, wait a minute. I built to that. And now you designed this. And that's an apple. And that's an orange. And now we've got to go and redo work. Really nothing like that going on here. But maybe there is. This is where the handoff might have happened. And now you really still need to have a conversation. Everybody still needs to be talking to each other. As a front-end developer, I might get this and I might say to myself, how can I do this as efficiently as possible? So how could I build that as efficiently as possible? Send me a JPEG. That's all I need, right? I've got some text laid over an image. Just send me a JPEG. That's probably the easiest thing to do. I just need that image. Now, as a client-focused team member, that really sucks. It's easy on the budget. Just have the client send me the image. They're just going to make 20 images and they're going to email them to me. And we'll load them in and the world will go on spinning and everybody will be happy. The problem with that is an image like this requires that the client is going to have to go in and use Photoshop or some other image manipulation tool. It doesn't really take advantage of the technology that we have available to us.
So let's dig in. What's really going on here? What is the designer actually doing? And what should I be seeing as a front-end dev looking at that Photoshop file? I should be noticing that that's a black and white image with this yellow overlay that's using Photoshop's multiply filter. Multiply, for probably about 12 of the last 13 years of my life as a developer, was like the work of the devil, because there was absolutely no way to do that in anything but an Adobe product, right? But now, today, can we do that with CSS? Yeah. Yes, we can. We absolutely can. Ah, sort of. There's a sore thumb sticking out there, and it is sort of shocking that even in this day and age, this effect works in everything except for Internet Explorer, even Edge, even the newest version, even Microsoft Edge. You cannot use background-blend-mode and have it work. Instead of getting this nice overlay with these rich inky blacks coming through and that nice colorization that you want, you'll just get the black and white image. You won't even get any of the color coming through. So where does that leave us? Send me a JPEG. Is that still the right answer? Is that what we're left with now because of Internet Explorer? Not cursing Microsoft into the Microsoft building, but come on. And you know, at that point, you might say, well, all right, now we've really got to find a different solution. Does the budget support that? Maybe, maybe not. Should it? Probably, if we're going to build the best that we can for our client. So you know, at that point, it could be easy as a developer to go back and say, well, look, just send me the JPEG. I looked at a solution and it doesn't work somewhere. I've checked off all my boxes. I'm done. No, just send me the JPEG. Or we could actually have a conversation. We could actually talk to each other and say, how can we make this work?
Is there some sort of alternate design that we can do for those other browsers that maybe don't support the features that we want to take advantage of? And then how do we present that? What's the right technical solution? We could use conditional tags. They're a pain to manage. They're just a pain to work with in a lot of different ways. Or we could use, again, new CSS. We could use @supports. This is sort of like media queries for features that the browser supports. We can actually write in our CSS files: do you support this feature? And as handy as this is for telling whether a browser supports a feature, Internet Explorer doesn't even support this feature to check if it supports features. But that's actually OK. In this case, that's more of a benefit, because it'll just ignore the whole declaration block, just like it would ignore a media query or anything else. So you can start to build your normal, every-browser code and then progressively enhance, which is sort of what we want to do online. So it allows us to develop for all of the browsers. And when Internet Explorer and Edge catch up, they'll just look great, finally, again. So we work with design. We say, well, what is our alternative? What's our alternative? OK, well, that's not those rich, inky blacks we were really looking for, but it gets the intent across. It gets the idea across that it's got this color key to it. You're still seeing the image. It's a solution arrived at through talking it out, through understanding what the roadblocks are, understanding what the technical features allow us to do, and then working around the things that maybe don't necessarily work. Can we make it better? Now that we have that solution in place, now that we're not relying on a static image, can we extend that? You bet. Yes, we can.
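The pattern the speakers describe, a fallback that works everywhere plus an @supports-gated enhancement, might be sketched in CSS roughly like this (the class name, image path, and overlay color are illustrative, not from the actual project):

```css
/* Baseline for every browser, including IE/Edge without blend-mode
   support: a semi-transparent color layered over the black-and-white
   image gets the intent across, if not the rich, inky blacks. */
.hero {
  background-image:
    linear-gradient(rgba(230, 190, 0, 0.75), rgba(230, 190, 0, 0.75)),
    url("hero-bw.jpg");
  background-size: cover;
}

/* Progressive enhancement: browsers that understand both @supports
   and background-blend-mode get the true multiply effect. IE ignores
   this entire block because it doesn't support @supports at all. */
@supports (background-blend-mode: multiply) {
  .hero {
    background-color: #e6be00;
    background-image: url("hero-bw.jpg");
    background-blend-mode: multiply;
  }
}
```

Because `@supports` fails silently in browsers that don't recognize it, the enhancement never has to be undone anywhere, which is exactly the progressive-enhancement behavior described above.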
So now that we've got that color separated from the image itself, and the client's just uploading a black and white image, why don't we say to ourselves, OK, well, why not allow the client to control that color? Suddenly the CMS can do that. Suddenly we can just add a color field. And the client can then come back and say, great, I can do whatever I want. This is fantastic. Maybe that's a little bit too much power in the client's hands. Yeah, maybe there's some vocal opposition to that. But then that's OK. We can come back and say, well, we'll keep that color palette within the brand guidelines, things that we've already defined during discovery. So at that point, we're using everybody on the team to really arrive at a solution. And maybe it's just a header, but it's a header now that is really empowering the user to control that, to really manage their site in the best way that they possibly can. Clients win, we win. That's really how we look at things. When we all communicate together, when the designer communicates the aesthetic intent of what they want things to look like and how they want them to behave, when the front-end developers act as that bridge to say, these are the features, this is how we can do that, and the back-end developers come in and support that with fields and everything the back end needs to do, that's when we build a successful site. That's when we don't have to ask the client to go and pay the Adobe tax and run Photoshop to manage their website. It's why people don't pay to run, trying to remember what it even was, the old Adobe Fireworks, not Fireworks. ColdFusion. ColdFusion. All those proprietary systems that we're trying to break away from. This is just another element of breaking away from those proprietary systems. Now, there is a caveat to all this. We talked about a couple of different CSS solutions. Some of them are a little bit newer than others.
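One plausible way to wire a CMS color field into that overlay is a CSS custom property that the template prints inline; the property name, markup, and default color below are hypothetical, not taken from the talk:

```css
/* Hypothetical: the CMS template emits the editor's chosen color
   (validated against the brand palette on the back end) inline, e.g.
   <header class="hero" style="--overlay-color: #e6be00;"> */
.hero {
  /* var() falls back to a brand default if the field is left empty. */
  background-color: var(--overlay-color, #e6be00);
  background-image: url("hero-bw.jpg");
  background-blend-mode: multiply;
  background-size: cover;
}
```

The stylesheet stays static; only one inline custom property changes per page, which keeps the "client controls the color" feature out of the CSS build entirely.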
When you're working with teams, especially when you're working with different teams on different projects, there's a real difference between the technical abilities and the technical skills that you have available to you. And that's why part of conversing, part of communicating with each other is understanding what each member of your team can actually do. That JPEG solution might be the best solution that that developer can offer you. Sadly. They might not know that app supports exist, or that background blend mode exists, or that all of these other things that are going to make the site better for the client. They might not know that those are available to them. So really, go into a project. Don't assume that everybody on the team can do exactly what you've seen on another website or what you've done with another developer. That's really where communication comes in as well as understanding those things. And we talked about it as well. That JPEG solution might be really all the budget can support. Budgets matter. Unfortunately, sometimes they matter. All of this collaboration we're talking about, it really does require more time with people on your team to be at those meetings, to be part of those conversations. And you really got to step back and say, is this worth it? And ultimately, I think it is. Ultimately it's going to help you build a better product at the end. But it does have to be something that you include, whether it's story points or whether it's the actual dollar amount at the end of the day. You've got to build in the ability for those team members to take that time out of their shackles in the basement or their dreamings of better designs. And actually sit down at the table and be part of that conversation. Even after that handoff, even after that wire frame went from A to B and turned into a design, that's really where communication still needs to happen and still needs to be budgeted for. So it's a little bit nontraditional in that way. 
So we're talking a lot about designers and developers talking. Project managers should be part of that conversation too. So let's talk about some of the tools and where it goes. Cool. Thanks, Kevin. So one thing that we really try to do is break things down into manageable, smaller chunks, especially on larger projects. You might be a small-a agile shop, a big-A Agile shop, more of a waterfall shop, maybe some kind of hybrid approach. Regardless of that, you want to try to find ways to break these cycles of design and development down into sprints, whether they're formal sprints or not. These kinds of short, focused sprints allow design and dev to collaborate more readily, and they foster a kind of shared responsibility. So that's design and development together: let's sketch out a solution. Let's quickly get that into the browser. Let's test it. Let's validate it. It's not working great? Let's sit down together and decide on a solution. This is so much better than the typical designer sitting down, headphones on, creating something for two weeks and then saying, here, development, make this. At that point, the client's probably already signed off on it, and those are the steps you're taking regardless of whether it's going to work or not. So we feel like those kinds of sprints, breaking things down, really foster that. Don't wait for the cake to be fully baked before you want to take a peek at that recipe. You want to be able to get in there, get into the kitchen and get your hands dirty. So let's look at maybe what we would call the old way of a workflow. There's obviously more people that would be on the team, but let's just take, for our sake, designer and developer moving through the key stages of a project. A lot of times, and I've experienced this through agency life, et cetera, design is present through what we'll call research and strategy and those three components of design: the UX, the IA and the UI.
And then there's sort of this handoff, okay? And then a big bunch of PSDs, hopefully labeled and layered, are dropped into a developer's lap, and then that developer has to decipher it, figure it out and make it happen. And at this point, again, the designer's gone to get a sandwich and it's all in the developer's lap to decipher and create this. So what we've moved towards is this more commingled workflow. You'll see we've got design and development at the table throughout all of research and strategy and discovery. And it waxes and wanes. Then we move into design. Clearly, the designer steps up their role there, creates those artifacts, creates the wireframes, et cetera. But development is on board. They're being kept in the loop, whether they're reviewing, they're at those meetings, et cetera. They're catching things. And as designers, we're actively bringing it to development and saying, what do you think of this? How does this solution work based on the specs we've laid out, et cetera? Obviously roles kind of flip-flop. We get to development, QA, design kind of comes back in, and so on. It's just this natural ebb and flow between the two. And it basically fosters cross-pollination across this whole process. That's what we do at our shop, and we really feel like it just brings better results. So we have a more real handoff here. By the time we've reached the development portion, we've probably already tested a few things out. We've got some prototypes. We know where we're going. So once development is really hitting the gas in the development stage, there are really no big surprises, and that's really what we want to try to eliminate. So prototyping, you might be doing this, but prototyping is a great way.
It also encourages design and dev to work together, test things out and iterate on them. It's part of that effort to break things into sprints. Style guides, if you're not using those, are just a great way for design to communicate with development. A lot of times these can be built together. A lot of times design might be working on style tiles, or on more of a proper design comp. And then what we want to do is quickly take that and distill it into the core elements of the UI. We can then bring that to development once we've moved past the wireframe stage. So development can start to take the approved wireframes and style guide and start actually assembling things in the browser, while design is maybe still working out other content types, other pages, other areas of the site. We work in sort of a hybrid waterfall, we're not an agile shop; I've actually seen it referenced as a sashimi waterfall, but essentially, imagine the stages as overlapping layers, right? So we still move through those traditional stages, but we try to overlap them as much as possible. We find that, again, this is sort of a great way. And of course, the style guide is going to ensure consistency across UI elements. And then there are modular systems. If you've heard of Brad Frost's atomic design, it's probably deeper than we can really get into right now, but he uses this atomic model to break a web product down into its basic core elements, all the way down to, you know, a text field or a button and things like that. So this is just another way that fosters a more iterative design process. And then sometimes you need clarification of intent. And I'm going to now pass this back over to Kevin. Yeah. Yeah.
I mean, sometimes you run all of those processes and I feel like going back to the balance of design and development, I mean, that's almost like 3D. If you want to extrude that and sort of see that they're not happening distinctly, you know, that design doesn't just stop and development just starts, that they're happening in tandem. You know, if you're trying to do atomic designing, you're trying to break things down into smaller sprints or anything like that. Those do foster those types of communication that you need to really understand. And even if you're not, you know, when you get something from a designer, sometimes you look at it, even if it's a well-labeled, well-structured photoshop document illustrator, you know, whatever you work in, Sketch. That doesn't make sense. Sometimes it does. Sometimes it does make sense or sometimes you're assuming things that maybe ought not to be assumed. You just wrapped up a site for you in Kibaya Works and this is kind of what the homepage, part of the homepage ended up looking like. It had this really great, big image as a background, text overlay. And then you open up this nav and it's like this full panel nav and that's great. It's a great way to kind of get into the site where, you know, maybe there isn't as much navigation as there could be. It's a great way to just sort of dive in and give it a little bit more of a polished presentation. So, you know, I get this file and I'm looking at it and it's like, that's great. That's beautiful. All right, I can build that. I know how to build that. I'm going to hit that button there and that's going to open. Everything's going to be wonderful. I say, well, okay, you've got this background here. So it's kind of blurring what's going on there and it's locked into the PSD. There's just this background image that's blurred. Well, what happens on the other pages? Yeah, just use the same background image. So you want to use the actual pages header to provide some consistency there. 
Is that even possible? We might have video up there. I think that might be my favorite question as a developer: is that even possible? And you know, looking at that, I thought, I don't know. Anything is possible if you pay for it. That's if the budget is... Right. Yeah, if it's a rich budget, then absolutely. We'll say yes to almost anything. But technically, that's a really fun exercise. Is that possible? Can you blur a video and have that not kill the computer of whoever's looking at the site? I wasn't sure how that would actually work. And that's where we build prototypes. Does this work? What's the right solution? How do we figure out if that works or not? And so we tested a bunch of different options outside of the CMS, because that's a lot easier to just tool around in. And ultimately, yes, it is possible. So this is what the careers page looks like. It has a full-screen video at the top, and you can open that navigation. The video actually keeps playing and is blurred in the background, and there's really no loss of performance, because the browsers have optimized how all this works. Yeah, that was my response too. I tried probably four or five different ways of doing it. I said, well, I'm just going to blur the whole damn body. And it worked. And it didn't kill performance in anything that I could find. Even running Internet Explorer, I think 10, in a virtual machine on my Mac, it didn't even really choke on that. I was like, all right, well, this seems production-ready. All right, let's roll with it. And I guess the point of that is, before acting, make sure you understand what your teammates are trying to do. Come to that conversation. If you're a designer, it's really, really easy to say, well, you know what, I want to move the title over here. And I'm going to move the subtitle over here. That's fantastic. It's really easy to do in Photoshop, right? Just move all those elements around. And then you hand that off to your development team.
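A minimal sketch of that "blur the whole body" approach, under the assumption that the page content and the nav panel are siblings (all selector names here are illustrative): a class toggled when the nav opens blurs the page subtree, video included, while the panel itself stays sharp.

```css
/* Everything except the nav panel lives inside .site-content, so the
   blur applies to the whole page behind the panel, playing video
   and all, and GPU-accelerated CSS filters keep it performant. */
.site-content {
  transition: filter 0.3s ease;
}

/* A .nav-open class toggled on <body> (via a small JS handler on the
   menu button) switches the blur on. */
.nav-open .site-content {
  filter: blur(12px);
}

/* The panel is a sibling of .site-content, so it is not blurred. */
.nav-panel {
  background: rgba(0, 0, 0, 0.4);
}
```

Prototyping this outside the CMS, as the speakers did, makes it cheap to verify the performance question before committing to the effect.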
And one of them looks at it and has a heart attack because they just built everything that was wire framed out and you broke that. And now they've got to go and redo work. So head that off. I talked about budget earlier. You have to budget for these conversations. But if you can get ahead of those things and prevent people from having to redo things, it's probably either break even or net gain on some levels, I think. And for developers too, it's really easy to sit back and say, it's not what was in the wire frame. That's not what was approved in the wire frame. And that's not what I built to. And now you're changing it. You need to go back. It's really easy to do that. It's a really terrible thing to do sometimes. Sometimes budget totally. You can go back and say, oh, the money's not there. We're not doing that. But for the most part, you want to be collaborative with your team members. You want to sit back and say, well, why did you do that? What are you trying to do with that? How can I arrive at a satisfactory development solution to achieve what you're trying to do? The way the web is now, there's so many opportunities for subtle interactivity, for animation, and things that don't necessarily have to be storyboarded and wire framed and specced out that you can do in development to say, what does that hover state look like? What does that transition state look like? Those are the places where we really can collaborate and really just add a little bit more icing onto these products. And being part of those discussions is where that sort of stuff can happen. I think it speaks to how you think of those different project phases. So we'll create wire frames, we try to get approval on those, but I think we allow a certain amount of fluidity to them as well. So we're not going to say, hey, that's not how the wire frame was. We're going to allow those kind of surprises and things that were inspired once we get into development to happen. 
And I think that also only happens when we're communicating. Yeah. There are definitely opportunities for a wire frame to be handed off, and you can go out and start building custom content types and building your fields and building everything that you need to support that design. But while that's happening and the designers are still working, it's not hard for a designer to come and say, hey, we want to change that. Does that really throw you off? Have you started building that yet? If you have, how much change is that? And really to sort of suss those things out. So with that, let's go make some kick-ass work together. There you go. Boom. There it is. All right. So we can keep this going on as a conversation. I don't want to keep you from learning more about the goat from the god of zoats. Let's try to. No, I think we definitely have time for a little conversation here if people want to ask questions. Yeah. I'll say the biggest thing I tend to run into, and of course my experience is more of that design, does there a thing, hands it off, and then it's all me. Yeah. The biggest problem I see with that is the designers don't think about dynamic content. They put three boxes together that each have two lines, but that's not realistic. In one fixed screen. Yeah. So having that process of having both the designers and developers working together at the same time, does that help that a lot then? It does to a degree. I think there's a certain element of insisting that the designer use either some real world content if it's available. That's the best. And it's usually the one that gets the most pushback because it's going to break that pristine design. But that's usually the best way to go about it, or to use variable length content and make sure that whoever is actually generating the content knows that there's at least some limit to right toward or to have in mind when they're creating that content. But you're right. 
It's hard when you get that design file and everything is exactly as it should be, and you've got a row of three elements and they're all exactly the same height. Isn't that the worst, when a title's in there, and then they throw in real content and it's just all over the place now? It's tricky to remember to make sure the designer does that. Sometimes, for us, we don't realize it until QA, because QA throws everything in there that they can. And then it breaks. Yeah, I mean, I think having that conversation up front can definitely help designers to be a little bit more open to trying those things; knowing what they're going to hand you sometimes makes them a little more empathetic. But sometimes you just have to insist: this is a library of test content, see how all of that works, and design to those really extreme edge cases to make sure that the design supports them. And hopefully you get the opportunity to see those designs and flag that. If, before it's officially handed off and signed off, you can challenge that designer, then hopefully the next time he or she is like, right, I remember, it's not always going to be this exact level of content. And then I'm wondering, with that different process of the developer being more integrated at the beginning, do you find that takes more time away from the developer from other projects, or does it all kind of work out in the end because they're not trying to figure things out later? Yes and no. I mean, I think in a perfect world, Kevin would kind of attend all of our meetings. It doesn't always work out that way, because maybe you're trying to crush another deadline. But then at the end of the day, it should kind of come out in the wash, right? So he's got less backtracking he has to do and things like that. Yeah, I think it does come out.
I mean, I've noticed that there's definitely a time savings in a lot of cases where I don't have to step back and say, well, you know, you changed this or you handed this off. But maybe that's not as good a solution as doing it another way. The stuff that would have come up earlier and probably would have saved everybody some time. That being said, I can remember being a contractor, being a contract developer coming in. That makes it even more difficult to say I want to be at those upfront meetings if who you're working with maybe doesn't subscribe to that or doesn't see the value in adding those hours to your budget when the budget's already pretty contained and pretty finite. So it's not the easiest thing in the world to do by any stretch of the imagination. But I think I've definitely noticed a difference in projects where I'm part of those conversations and projects where I'm not. And the ones where I am, things tend to go more smoothly. I think even like your ramp up time, right? So because we're small and we have to kind of move between projects a lot. So when you're at present at those meetings or even if you can't be at the meetings, like we'll use Basecamp, so we'll put meeting notes in. So even if you can't be at the meeting, you're probably going back in and just seeing where we are, right? So when it's time for you to really ramp up, you're not going from first gear to fifth. You're already kind of like, so I think it's easier to transition and get up to speed and then if you have to move to something else, it keeps everything kind of moving at a nice pace versus a dead and then, whoa! Also, I think that's really reasonable for certain key things like the developers have to do a wireframe review. There are certain things like that where it's just like, you know it's going to save your time in the ad. So it's a really good review to argue for the developers to spend time doing that. And Chris can speak to that one to me. 
It doesn't take them a huge amount of time to look at the wireframes and roll their eyes. There's always that question, like where is that thing right there? What is that supposed to be? What does it do? Where is it coming from? We don't have that piece of information on any of the stuff that we've been talking about building, how do we get there? Those are the things that you notice from that side. Right? I was looking at the modified timeline there and looking at the different widths of designer versus developer and wondering, as I look at that, that's a fairly complex resource allocation problem if you're trying to run a company and you've got a certain number of designer assets and developer assets. Do you have any tips or tricks or tools that you use to make sure that you're doing the most efficient job you can of using the people you have, keeping them relatively busy, but still managing to pull off that shift of intensity? Yeah. I mean, Amanda, our project manager, could probably speak better. To that, she manages that better. I mean, I think a big part of it is we meet beginning of every week, Monday morning. I mean, everybody in the shop is around the table and what are you working on? What products do we have for the week? What deadlines do we have for the week? What's on your plate? What's on your plate? And really trying to make sure that there isn't anybody at the start of that week who's feeling like they can't get everything done they need to that week. And that there's somebody at the other end of the table that's going, I'm going to get golf today. This is great. We want to make sure that the balance is there between everybody who's at the table and obviously balance that between developers and designers as their roles are fit to do. I think that's the biggest part of it is as a team, we're always sort of communicating with each other. I'm communicating with our other developer throughout the week. What are you working on today? 
How are things going? Do you need help with anything? Or vice versa, hey, you got some time. I really need this to spill onto your plate. It's really just an ongoing conversation. I wish I could say that there was a tool out there and I think we've tried some tools that rely on Gantt charts and things like that and they often end up being, you know, you're spending more time managing that than actually getting things done. So I haven't found the magic bullet for that yet. To be honest with you, we use like a Google sheet that kind of shows what everyone's working on. What we try to do, we're actually trying to, high tech guys. One thing we do try to think about is sort of like complexity levels of projects and thinking about, okay, so a designer right now maybe has three or four projects on their plate. One of them is very intensive and that's going to be half their week and the other two are like, because of the phase they're in are much lower intensity. They just might need a few hours throughout the week. So we try to balance that same with you. Like you probably have one primary project that you are just in the weeds on, but you know, there's enough breathing room in your schedule that you can also keep an eye on those other two to three that we also have in the, it becomes just a, you know, plate spinning act, you know, sometimes slips, but yeah, that's kind of how we try to approach that. Yeah, I think a lot of it is being aware that, you know, you're not going to be able to necessarily just have your head in the weeds on one single project, you know, for multiple days, like that's just a rare occurrence, right? Yeah. You know, you're going to have your blocks of time that you know you need to carve out, but you know, it's sort of a habit thing. You've got to, you know, maybe later in the day, that's usually how I do it, three or four. 
But just before the end of the day, I need to go check in on how this project, this project, and this project are doing, just so that I'm not a roadblock to those conversations, so that people aren't waiting for me to answer questions and everybody else can keep moving. So it's just trying to be empathetic toward other people on the team, to not be that one where everybody's going, what's Kevin doing? You know, where's he at? So, yeah. Do you have any tips or experiences on different tools for collaboration, when you can't always be in the same office on Monday morning? That's true. Okay, how do you work, especially with your clients as well as the developers and the designers? What works? Well, I mean, our office is in Boston. I work out of my home in Maine. So I'm lucky enough and close enough that I can actually drive down on Mondays and be with the team for one day a week, in person. But the rest of the week is all remote for me. And even though the rest of the team is in the office, we use Slack. Slack is a great tool. It was tremendous. I feel like it really changed our inter-office communication. Yeah, it really did, and I'll admit I was resistant to it at the start. I didn't want another communication platform to worry about. You've got email, you've got Basecamp, you've got Google Chat and instant messengers, you know, all these other tools we were trying to use. And it's really centralized a lot of that communication out of these trial-and-error sort of systems that we were going through. And there's a history. You can go back. If we had a conversation, I can scroll back and see, right, that's what we talked about. Yeah. We don't allow our clients into Slack. Some shops do. We don't. That's an internal tool. Yeah. We use Basecamp, which, you know, isn't perfect, but we've been sort of sticking by it. But you can have different channels in Slack. So you could have a client channel. You could.
We haven't done it, because I think the concern on my end was that I didn't want it to blur the lines, the barriers between us and our clients, with clients just suddenly popping in. Yeah, we're working through that right now. Yeah. It feels almost like... I've heard stories. Yeah. I feel like it's similar to letting a client text you. There are just certain avenues of communication that don't work as well as others, I would say. And Basecamp, for the most part, is what we use for file sharing and, you know, more email-type communications, scheduling. And then we use Google Hangouts for video chat. It's okay. It's all right. What is that? I just used it the other day. Yeah. It was good. I feel like I've never used one that I didn't have some sort of issue with. So what about, well, Slack gives you audio calls. Slack, but not video. Not video. No. Yeah. Right. That one's proprietary. It's got a bit of a price tag to it. But when I've used it with other clients, it's like, all right, it's more solid. This works. It works better than some of the other paid solutions out there, I will say that. And then if it comes to it, we Skype. You know, we Skype as well. Skype also, for the freebies, Skype does give you the chat. It does. That's what we use. Yeah. Skype, for whatever reason: no money, but low quality. Yeah. Right. Skype has always been, for me, a barrier with clients who say, I don't have a Skype account. Yeah. And usually they'll come in with, well, yeah, I have a Gmail address. So the barrier to entry is usually a little bit lower with Hangouts, but Skype is fine when we get to use it. It's really easy to set up a dedicated Hangout, so we have a couple of internal Hangouts. So it'd be like, hey, meet me over in ICS-1.
And then with Slack, you can really easily just sort of push that over, and then we can all be in a meeting within a couple of minutes. Yeah. I missed the first two or three minutes, so I didn't quite get it. So, what are you doing, are you only doing front-end development, or do you also do some back-end? So we, and maybe I'll get tarred and feathered for saying this, don't do a lot of Plone work in our shop. We actually do a lot of WordPress work in our shop. We have done a lot of Plone work, though. Yep. Yeah. And Plone is fine. I'm just asking, because when you showed your simplified diagram with the designer and developer, I thought, well, actually, the way you describe it, the designer, like, makes their stuff in Photoshop and then hands it off. That's not how I would necessarily define the designer. Or, on the other hand, the developer is not a single person in that case. For me, a developer could be somebody who knows how to implement all the business logic with the data that I see, and that person doesn't need to know more than the basics about CSS. For me, you could also say that could be the job of the designer. Like, I don't know, the traditional way is that the designer builds their stuff in Photoshop, but you could also say a designer is somebody... it just depends on what gets handed over. So you could also say, if the designer just hands over a prototype in HTML and CSS, because that's what they know, that would also mean the conversation that you described is still just as valid and necessary. Yeah. So, yeah. Maybe they mean web developer when they say developer. And when we use the term developer, we tend to mean, yeah, down a little bit deeper in the process. Yeah.
And I think, you know, it's a little bit simplified just for the purposes of the talk, but you're absolutely right. I mean, in our shop, there's myself and another developer. I end up doing more of the structure, the code, the foundation, less of the CSS, less of the front-end stuff, more of server management and stuff like that. But I'm still going to be at those conversations, because that's my job. But there are some elements there that are more appropriate for our front-end developer to know. So it is a little bit simplified just for the purposes of the talk, but you're absolutely right, there are some levels of development that it just doesn't really apply to. Yeah. So I think that's the key with the statement, you know, it doesn't always apply, because nobody can know everything. But in that case, it looked like, okay, in this case, the developer would be the unicorn; the developer would need to know the hardcore back-end stuff, like in my worldview, plus CSS and all the rest. That's right. Which kind of blurs the design line, absolutely. There are a lot of designers who would be handing off HTML, CSS. Yeah. Can you show the slide again with the designer and developer working together? Yeah. The pumpkin one. The pumpkin dude there. Yeah, that one right there. Yeah. So I mean, it's definitely simplified, but... Yeah. There's usually other people involved. We just, for the sake of this, distilled it down to those two roles. Right. Exactly. You don't need like journals and fixtures and fairs. Even though they exist. It's true. That's right. So which one are you? I'm the dog. You're the dog. I would have made that choice. Yeah. Absolutely the dog. Yeah. I'll be handed the cat role, then. Who's the cat and who's the dog? I'm apparently the cat. We just decided that. I'm the dog. I'm definitely the dog. I'm a, yeah, I'm a designer by origination.
I would go with that. Yeah. Awesome. This is great. Thank you. You're very welcome. Thank you. Thank you.
How designers and developers can get along. For years, developers and designers have had a rocky relationship. Lack of communication and understanding between both departments means details get lost in the shuffle, and often the intent of the design is lost. Frustration ensues. The work suffers. It doesn’t have to be this way! Adam & Kevin from ICS will show you how design and development can work together, in harmony, to create amazing websites and digital products.
10.5446/55282 (DOI)
Okay, then let's start. It's filling up here slowly. After lunch, I think it is, everyone has a full belly and no brain for this, but it's quite an easy talk. It's about how we used Plone 5 as a framework, as early adopters, on some bigger projects, not just small sites, and we used a lot of add-ons and techniques we loved in Plone 4. So yes, it's a story. This is Johannes Raggam from Graz, now living for some time in the States. I'm from Graz, I'm doing Plone development since around, I think, my first Plone site I deployed in 2005 or so, and I'm contributing more, especially since I met Jens and did an internship at his company. Since then I got more involved in the Plone community, doing core contributions more or less since 2008, I guess, or 2009. And yeah, in the meantime I'm staying in Minneapolis until the end of this year, because my wife is at the university there, and I'm enjoying the States right now. And I'm Jens Klein, I'm doing Plone since version 1, early days, and that's, I think, 2002 or something like that. In between I did some different projects, but now I'm back to core development again. I'm also living in Austria, in Innsbruck, and these projects are from this area. Yeah, and the talk is that Plone 5 helped us to meet our requirements, sometimes even exceed the requirements we had with our projects. This was great, but due to the early adoption we had to face a lot of problems and find solutions for these problems, and we always tried to bring this back upstream to the Plone core. And at the end we now have, from our point of view, a Plone-as-a-framework thing that's far closer to what we want it to be.
Yeah, the Plone 5.0 release, as always with these .0 releases, was not really ready in all corners. At the pure CMS level it was okay, but there were bugs left and it was not ready as a framework. Anyway, it was important to get it out of the door so people can start using it and report back, so it was okay to release it. But as a framework there was a lot of work to do. Still, we decided to use Plone 5 for our projects because of all the improvements which are in Plone 5, like the handling of resources and what else. I think it was the right decision in the end, because it gives us a path for future migrations. Also, the new theme was a big advantage; everything is responsive from the beginning, so it was really much easier to start with Plone 5 than with 4 at this point. It was a point where 5 was just released and we had to decide what to use: do we play it safe with Plone 4, or do we take Plone 5? That was the question. And now we have picked out three example projects, different ones. The first was a more or less consulting project without much programming for me; I just trained the staff there. It was a series of workshops. There were only a few changes we made to the core, but I did not really write code for their project; they did it all on their own, with the training. And the requirements: this is Porsche Informatik in Salzburg, an IT company of Volkswagen, and they had to rebuild their sites, or rethink their sites, for their car dealers; there are 800 car dealers worldwide. And it's also an enterprise corporate identity: you have content building blocks to use and so on. So it's really content management, but on the other hand there were a lot of services inside of this. Yeah, this was the project, and the characteristics: a small team, they managed it in about ten months.
This project meant a frontend developer, two internal developers, and one external developer, plus me as consultant, to do this project with Plone. That means I didn't work full time on this project. For them it was also the case that they had a lot of day-to-day work in addition, and so they managed to do the project alongside that. But they had an existing Zope 2 solution already, so they knew Zope, though it was really pure Zope 2, starting in 2005 or something like that. And the idea was to reuse the knowledge that's in there, because you don't have to explain certain things to somebody who knows Zope, and to start with Plone, but not with the old project's code. In fact, they chose Plone because they were looking for a standard CMS. It was a kind of shoot-out of CMS systems, and with Plone it was like: okay, we already have the knowledge, it's an enterprise-level CMS that has the security, and we can modify it as we like and use it as a framework for us. That was the base idea. And with me, then, there was the idea to use Mosaic, or better, plone.app.blocks and plone.app.tiles and layouts, to do the whole thing. So they didn't have to learn portlets and viewlets and so on; they just had to write tiles, and that was really easy. If you only have to write tiles, it's much easier than having to teach all the stuff we have, all the thousands of concepts. The other project we're talking about is the Architekturstiftung Österreich, translated as the Architecture Foundation of Austria. It's a society for people who are interested in architecture, and each of the Austrian provinces has one department of this society, and all of those need a website. They already had one, based on a custom-built PHP platform. And we got this project based on our proposal of building it in Plone with Lineage, a Lineage setup with 11 sites.
And the core requirements were that they were unsatisfied with the old solution, which was hard to use. They wanted a really easy-to-use CMS, and images were very important to them. They already used some images, but not in a way that fits the modern style of websites and the requirements they had; the architecture people needed to show a lot of images on the site. Of course they needed a mobile-friendly website. And we also took the opportunity to rewrite the JavaScript of plone.app.imagecropping at this point, took a different library, and also polished up the bugs we had in plone.scale and so on. So that was what we did on the core and really important modules, to really bring this back. It was a big project, and we touched a lot of add-ons too, which were not Plone 5 ready, and made them so. It's also a big thing for people to just take their add-ons and bring them into a project. And the characteristics of this project were that they had, as I said, 11 sites deployed all over Austria, and we had more or less 11 clients talking to us. There was one client-side project manager who collected the change requests from the clients and so on; this was the main communication partner for us, but still we had to talk with 11 individual clients. And we set it up with one project manager on our side, two technical people, Johannes and me, and the design company from Graz. Yeah, this was the project. The interesting thing is that we didn't want to go agile with this project, because of so many people around; if you have to ask 11 people, plus the persons behind them, and do it agile, it's really difficult. But in fact, with Plone we were able to do some changes late in the project stage anyway, and at some point it was needed that you can work this way with Plone. So it's kind of a mixed project: it started classical, more or less, and then went agile by accident.
And another project we did, with Peter Holzer sitting in the corner, was for the Swiss Bankers Association. It was a project where there was a design agency involved, and Peter did the main work with technical concepts, implementation, consulting. And the interesting thing about the requirements: it has to be secure, a modern system, it has to be responsive. And the main thing is that in Switzerland you always have at least three languages. I think five languages were involved at this point; four languages, okay. And then not every document is translated; a PDF may be available in only one or two languages. And we had to find a way to tell a collection to find all German documents, but if there's no German document, show me the English document. And for this project we developed two add-on products that are available to everybody. One is plone.app.multilingualindexes, a long name. plone.app.multilingualindexes deals with the idea of fallback indexes, so you can define fallback languages and so on. I think Peter has a short demo about it; I'm not sure if he's coming to this, but it's really useful. Another add-on we developed for this project is collective.linguatags, so you can have tags in one language, but they are translated. We did some changes upstream so that this is possible, so that you can then translate the tags in a control panel. So in fact you tag in one language, but the output at the end is a translated one. It's also very useful. Yeah, our involvement in open source: we are enthusiastic about open source development. It's in the core of our hearts. We love the collaboration with other people, and we believe that this also fits a good business model.
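The fallback behavior described here can be sketched in plain Python. To be clear, this is only an illustration of the concept; the function names, the fallback table, and the data structures are invented for the example and are not the API of plone.app.multilingualindexes.

```python
# Conceptual sketch of a fallback-language lookup: query for German
# documents, falling back to English where no German translation exists.
# All names here are invented for illustration, not the add-on's API.

FALLBACKS = {"de": ["en"], "fr": ["en"], "it": ["fr", "en"]}

def resolve_language(available, wanted):
    """Pick the best language for one translation group: the wanted
    language if present, otherwise the first configured fallback."""
    if wanted in available:
        return wanted
    for fallback in FALLBACKS.get(wanted, []):
        if fallback in available:
            return fallback
    return None

def search(documents, wanted):
    """Return {document: language} for every document that is available
    in the wanted language or one of its fallbacks."""
    result = {}
    for doc, available in documents.items():
        lang = resolve_language(available, wanted)
        if lang is not None:
            result[doc] = lang
    return result

# One entry per translation group: the languages it exists in.
documents = {"annual-report": {"de", "en"}, "fact-sheet": {"en"}}
print(search(documents, "de"))
# The annual report comes back in German; the fact sheet falls back to English.
```

In the real add-on this logic lives in a catalog index, so ordinary collection queries get the fallback behavior for free instead of post-filtering results like this sketch does.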
For example, Plone is a strong CMS because many people are working on it and many people are fixing bugs, and we try to give back to the community as much as possible. Because we believe, if we make an improvement or find a bug or something, of course we want to get it upstream, so that not only our people can profit from it; other people maybe find other problems with a bug fix or an improvement we made, and fix those. Yeah. I think the thing why Plone is so great at this point is that it's completely community-driven open source, not company-driven open source, so you have a really wide range of people supporting Plone, and at this point this is a big advantage for the customers. It's a big advantage for the companies using it, but it's more difficult to make a business out of it. So you primarily can't sell it as a product, that's my experience, but you can sell the service around it; that's the thing. And we usually prefer the community-driven thing over this company-driven stuff that is also called open source. Do you want to show some of the pull requests? We have prepared a collection of pull requests we have made for the three projects we talked about before, and all in all we made, I think, around 200 pull requests for these three projects. Actually, we underestimated a bit how much had to be done for using a .0 Plone release. But yeah, let's look at some of them, like the portal tabs. The portal tabs now have the review state, the workflow state, as a class on the portal tab's HTML tag, so you can actually make them red if they're in a private state or so. We found it very useful, and actually I was wondering why it wasn't already the case, but now it is. The resource registry is always a big headache for us. The new resource registry process was not really finished, in terms of how to work with it if you are an add-on developer, and it still isn't at the moment, but it's getting better.
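For context on what "working with it as an add-on developer" involves: in Plone 5, an add-on declares its bundles through a registry.xml import step. The following is a hedged sketch from memory; the interface path is Plone 5's bundle registry, but the bundle name, resource name, and file paths are invented examples.

```xml
<?xml version="1.0"?>
<registry>
  <!-- Hypothetical legacy bundle for an add-on; precompiled files are
       shipped so Plone does not need to compile the bundle itself. -->
  <records prefix="plone.bundles/myaddon-legacy"
           interface="Products.CMFPlone.interfaces.IBundleRegistry">
    <value key="enabled">True</value>
    <value key="compile">False</value>
    <value key="resources">
      <element>myaddon-legacy-resource</element>
    </value>
    <value key="jscompilation">++resource++myaddon/bundle-compiled.js</value>
    <value key="csscompilation">++resource++myaddon/bundle-compiled.css</value>
  </records>
</registry>
```

The rough edges discussed in the talk show up exactly here: getting such a bundle, especially a legacy one, to behave the same in development and production mode was the hard part in early Plone 5.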
And there are some dirty corners in Plone where you don't want to look, and there were some scripts, like the script that generates the Gruntfile, which was in fact just something that was not really code you want to show anybody; it just worked somehow, it was really interesting. And then all this bundle-compile stuff was not really finished, and legacy support was missing. If you then start doing things like bringing plone.app.imagecropping to Plone 5 with some new JavaScript, nothing works. So it took two days to rewrite plone.app.imagecropping, and it took me three days to fix all the stuff around it. So that's the thing about it: this early adoption is really no fun, right? But the stuff goes back now to the core. There's still some work to do, and there are some pull requests from Nate, I think, still open. One thing we fixed with the resource registry was this: if you had a legacy bundle, and in this legacy bundle a JavaScript library which integrated require.js in a certain way, then you could not run this legacy bundle in development mode, because you got a require.js error. I forgot the exact error message, but now you are able to run it in development mode. This pull request is still not merged, because there is some work left to do. And the toolbar: we did some cleanup of the toolbar code and purified it a bit. One of these pull requests, the major one, which is for a better styling of the toolbar, is also not merged yet. The problem with the current toolbar as we released it with Plone 5 is that, in fact, it wasn't finished from the styling point of view. So what we did was just to say, okay, what's the minimum we need so our customer doesn't say that's crap, and I did that.
But I think there's work left to do, because if you look at how our customers work with it, it's not really just a thing for discussion; it's maybe worth a whole sprint to make the toolbar work in one consistent way instead of the three different ways we have now that don't quite work. At the moment we have a toolbar, we can put it at the top, we can put it at the left, we can do everything with the toolbar, but we can't really use it. It's much better than the green bar, okay, yes, but there's work left to do. Then we reworked the related items widget in Mockup. It's more or less a complete rewrite. Now you can decide via the user interface whether you use browse or search mode; before, that was a configuration option. The other thing is that you can upload content via the related items widget. We think it's better than before. Still, if you have used relations or references a lot, it's not enough yet: it gets cluttered with content, you click into it and don't know why something popped up, and everything moves around on the screen. So there's work left, but it's much better than before. Anyway, I think the whole concept needs to be rethought at some point; that's also one of the corners of Plone that is not perfect at the moment. If you only want to reference three or four items it's okay; if it's more, it's really difficult. The other pattern we improved was the structure pattern, the folder contents pattern. It began with a pull request from someone from New Zealand, I guess, to make it more flexible so that you can use it for any type of content, not only folder listings. Other improvements went into the styling, so that it looks a bit nicer and actually works better. Then TinyMCE got some improvements.
For example, in Mockup we are using a TinyMCE repository from someone outside the core TinyMCE community who builds it with all the language translation files. I got commit rights to this repository and documented the way this TinyMCE package is built, and now we as a Plone community are able to make builds of our own. Another thing is that you can now list multiple styles in the theme's manifest file for TinyMCE to include. Then, in two projects we used Mosaic without the editor. Why without the editor? Because the editor is not something I want to show customers at the moment. It got better at the Leipzig sprint; that work started back almost a year ago, and it has all improved since. But Mosaic is still a great way to do the user interface for Plone. It really works perfectly, and Mosaic is more than just this editor where you can drag and drop things around, because Mosaic changes the whole way you build user interfaces in Plone. You have site layouts and content layouts, and this is very powerful; it even substitutes for Diazo a bit, because it's just pure HTML. It's very easy to work with, and you can have different site layouts for different sections of your site. If you use Lineage, you can have a different site layout for every sub-site. A site layout is just HTML that includes content layouts and other tiles. It would take its own talk or a tutorial to show this in depth, but it's really a great way to build Plone sites. And as I said before, if you have people new to Plone, they only need to know about tiles then. That's much easier to learn than all the concepts we have, like portlets and viewlets and so on, and new Plone programmers become productive much faster this way. That's really great about Mosaic. There's a question over there?
Yes, the question is whether you can set permissions on the tile level. A tile is in fact a more complex view; if it only shows something, then it's the same as a view, so you can also put permissions on it. If you write your own tiles, you can do this, but in the editor you can't. You have to go into the code where the tile is registered and change it there. So we have to go into the configure.zcml and set the permission for the tile there; you can't do it on an instance level. So you can do this, but at the moment it's on the configure.zcml level, as you do it with normal browser views. And there are different permissions: there are edit permissions and more than just view permissions for this. That's very important. Also, with the changes we made, we moved the layout-aware interface to adapters. ILayoutAware was the thing that in fact made a site a Mosaic layout-aware site, and it was a behavior. The problem with it being just a behavior is that you have to put the behavior on some context. So what about when you want to render an error page with your site layout? Not possible, because the error is not layout aware; since it was a behavior, you could not just put it on. So we moved all that to adapters and made it more general. Now you can mark an exception as layout aware, and then your exception, the 404 page or whatever you like, can get rendered with a site layout. I think we are now at a point where we can make everything layout aware in Plone and, if you want, no longer need the classical main template. That's the basis for doing this. This was merged at the Mephisto sprint in Leipzig, and it's now in Mosaic 2, I think. So it's already in there.
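The site-layout-plus-tiles idea described above can be sketched in a few lines of plain Python. This is only a toy illustration of the concept, not the real Mosaic or tile API; the placeholder syntax, tile names, and `render` function are all invented here:

```python
import re

# Invented tile registry: each tile renders independently.
TILES = {
    "nav": lambda ctx: "<nav>Home | News</nav>",
    "content": lambda ctx: "<main>%s</main>" % ctx["title"],
}

# A "site layout" is just plain HTML with named slots.
SITE_LAYOUT = "<html><body>{tile:nav}{tile:content}</body></html>"

def render(layout, context):
    """Replace each {tile:NAME} placeholder with that tile's output."""
    return re.sub(r"\{tile:(\w+)\}",
                  lambda m: TILES[m.group(1)](context),
                  layout)

print(render(SITE_LAYOUT, {"title": "Hello"}))
```

A different section of the site can swap in another layout string while reusing the same tiles, which is the point being made about per-section and per-sub-site layouts.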
Then some smaller things and bug fixes we did. One nice story: at some point we were using Mosaic, and at some point someone said, my Plone just stopped, the process was killed. Why? It just ate up all the memory. We looked around and found that Mosaic uses a lot of sub-requests, because every tile is a sub-request. And there was a bug where the count of objects registered as modified was multiplied by two every time a sub-request was processed, because of a bug in Plone's sub-request handling. Finding that bug was like, wow. And I think the speed of the whole thing is now much better because of this tiny fix; it was just a two-line change or something like that. But you have to find it. There were more: there were also a few permission fixes and other small things. It was a lot of stuff, these are just a few of them, smaller pull requests and bug fixes in different modules. We also did a lot of code cleanup. I really am a bit like the guy who wants to have everything PEP 8, with flake8 on top, and I love this code analysis thing, plone.recipe.codeanalysis, because it tells me when everything is nice. You don't have to do it, the code works without it, but if new people coming to us want to learn Plone, it's much easier for them to get into it if the code follows a clear structure, and even the code style is very important here. Every package I touched, I just ran a few scripts on it that do isort and the UTF-8 header and all that stuff automatically. For every package you touch it's 20 minutes of work, and it's done. There are some left, but yeah. One of the other heroes here also reformatted a lot. So it's very important to write code this way and also to clean up our old code, to make it easier for newbies to get in.
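The sub-request accounting bug described above is easy to picture with a toy reconstruction. This is illustrative only, not the actual Plone sub-request code; the function names and the counter are invented:

```python
# Hypothetical reconstruction of the accounting bug: per sub-request,
# a resource counter was multiplied instead of incremented, so a page
# with many tiles (= many sub-requests) blew up the numbers.

def process_buggy(counter, subrequests):
    """The bug: the counter doubled on every sub-request."""
    for _ in range(subrequests):
        counter *= 2
    return counter

def process_fixed(counter, subrequests):
    """The fix: count each sub-request once."""
    for _ in range(subrequests):
        counter += 1
    return counter

# A Mosaic page where every tile is a sub-request, say 20 tiles:
print(process_buggy(1, 20))  # 1048576 -- exponential blow-up
print(process_fixed(1, 20))  # 21 -- linear, as expected
```

This is why a two-line change can turn out-of-memory crashes into a barely measurable cost.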
We depended a lot on Lineage. For example, the Porsche Informatik setup uses 800 sub-sites made with Lineage, and we also used it a lot for the Austrian Architectural Forum project, with 11 sub-sites. Just 11. But still, we needed some things which were not working. Now, with a specific add-on for Lineage control panels, together with the Lineage registry add-on that gives you a local registry in the sub-site, you can change the settings for a specific sub-site via a control panel; we had to change the context binding of some of the control panel views. That was quite a tiny bug fix, but still: now you can open the control panels which make sense in a Lineage context not only on the main site but also in a sub-site. What else? We also had to make some fixes to the routing, the path setting of some widgets, for example the related items widget, so that it gets the correct path set based on the sub-site. This is also important for plone.app.multilingual, because it also uses navigation roots, and it's the same problem; it's not only for Lineage then. We now have vocabularies, like the subject vocabulary, that are also available per sub-site if you want to configure it that way. Some general bug fixes, for example in plone.app.contenttypes: there was a lot of initialization code in the views, in the __init__ methods. These are always called for the layout menu, which we found out, so we moved some of that initialization code into properties. Some other fixes, a lot of stuff, I can't show everything here. I think there were some plone.app.event enhancements, and the records proxy stuff had bugs. This last one fixes the out-of-memory errors with the Plone sub-request; it was an important fix, everybody likes it. Even Diazo is much faster now.
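The sub-site local registry behaviour described here, where a sub-site setting overrides the main site's and lookups fall back otherwise, can be illustrated with the standard library's `ChainMap` standing in for the real registry and Lineage machinery. The setting names are made up:

```python
from collections import ChainMap

# Made-up setting names; ChainMap stands in for the real registry.
global_registry = {"site.title": "Main site", "site.logo": "main.png"}
subsite_local = {"site.title": "Sub-site A"}   # local override only

# Lookups hit the sub-site's local registry first, then fall back
# to the main site's global registry.
subsite_registry = ChainMap(subsite_local, global_registry)

print(subsite_registry["site.title"])  # Sub-site A
print(subsite_registry["site.logo"])   # main.png
```

A per-sub-site control panel then only needs to write into the local mapping; everything it does not touch keeps following the main site.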
By accident we did not have unload protection in Plone 5.0; now we have it again because of these bug fixes. When you are configuring widgets with autoform widget annotations, you can now pass callables and functions as configuration values, which was not possible before. You can order the fields and fieldsets with the autoform form directives. I made some improvements to the patterns library, added some patterns and updated others. One interesting bug we fixed in plone.scale: when you scaled an image to certain sizes, the process ran out of memory. The whole concept of how the scaling adapters are looked up has also changed. Also, GIFs were always converted to JPEGs; that changed, so GIFs are converted to PNGs now. And there were some commits to the Plone transform chain: the transform chain now throws events and notifies before and after all transforms, or around single transforms, so you can hook in there. If you want to know whether your Diazo theme is slow, and why it is slow, you can now just write two subscribers and measure it. So, lessons learned: don't underestimate .0 releases. It was an experience; it took more time for the project than we calculated, but it was okay in the end. Sometimes we tried to patch some stuff locally, but then with the next release everything is broken, so better fix it upstream. And avoid branching things without merging them back; that's a bad idea. We see a lot of people branch in their own repository space, end up with that branch for years, and then we can't get back to a common state. It is also good to write code that others can review, because other people will find bugs in your refactored code, and that's great. And that's an advertisement to contribute. Yes, please contribute.
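The two-subscriber timing trick mentioned above can be sketched like this. The event plumbing here is invented plain Python (the real transform chain notifies through Zope's event system), but the measuring idea works the same way:

```python
import time

# Invented event plumbing for illustration only.
subscribers = {"before": [], "after": []}

def notify(when, name):
    for handler in subscribers[when]:
        handler(name)

def run_chain(transforms, data):
    """Run each (name, func) transform, firing before/after events."""
    for name, func in transforms:
        notify("before", name)
        data = func(data)
        notify("after", name)
    return data

# Two subscribers are enough to time every transform in the chain:
timings, started = {}, {}
subscribers["before"].append(
    lambda name: started.__setitem__(name, time.perf_counter()))
subscribers["after"].append(
    lambda name: timings.__setitem__(name, time.perf_counter() - started[name]))

result = run_chain([("uppercase", str.upper), ("strip", str.strip)], "  body  ")
print(result)           # BODY
print(sorted(timings))  # ['strip', 'uppercase']
```

If one entry in `timings` dominates, that transform (for example a theme transform) is your bottleneck.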
Yeah, contribute back to the core. Whatever project you're doing, always contribute back to the core. Be part of the worldwide community. It starts with asking questions at community.plone.org and with filing issues on GitHub; if in doubt, just file them in the main Plone issue tracker, and we can sort them out from there. Enhance the documentation. A lot of the people writing documentation are non-native speakers, so native speakers will find typos or sentences that maybe don't make sense, and can make it better. That's important for your future self too: I have ended up several times reading my own documentation after a year or half a year. Fix bugs, if you can do it. Create pull requests, check that the tests are passing, and sign a contributor agreement; a conference is a great opportunity to sign a contributor agreement if you haven't already. You don't need a contributor agreement for the add-ons: everything in the collective, for instance, is open for contributions from everybody. And if something is missing or outdated in a readme, just change it. That's how it works. Then the big thing is Plone Improvement Proposals. There is a process behind them that you need to follow; they are for big new features, breaking changes, and for ripping stuff out of Plone when it's outdated. If you have any questions, contact the framework team or the people around it, and you'll get information on where to start. Also, at the moment we are on Python 2.7 and we want to go to Python 3. A lot of people are working on it at the moment, and Alexander just told me that RestrictedPython is well on its way to Python 3. Other packages are already running on Python 3, so we are on a good path to get there at some point. But even in Plone we need to do our homework.
And code cleanup, readability, testing, all that stuff is also worth a commit and a pull request. I think that's it. We have some time left for questions. Any questions? The question was whether you can see any of the sites. One from Peter is public: swissbanking.org. The sites from Porsche Informatik are not public; they go public in two weeks, not with all sites at once, but always ten sites in one batch. And the Architectural Forum site is not public yet; it's in content entry, so people are just doing the work to get the content in, and it should go online at the end of the month. I hope they finish that, and we also have to fix some bugs, but nothing serious; it's a very fresh project. But swissbanking.org is online, and you can have a look at it if you like. That's a good example, maybe also for the multilingual stuff and how it looks. Any other? Okay. Thanks.
Plone is known to be a great CMS and a great framework for CMS-ish applications. Because of the many improvements in Plone 5 we used it for our projects, even before its official release. Like with any Dot-Zero release, this comes at a cost: bugs, regressions, missing functionality. But this is open source, so we gave back and – together with the Plone community – fixed all the problems along the way to everyone's benefit. This talk is about the requirements we had, the problems we faced and the solutions we found on three example projects: - a multi-site project for the Austrian Architectural Forum, - a multi-language site for the Swiss banking association, - consulting for Porsche Informatik which produced an in-house solution for around 800 sites We heavily used Dexterity, Lineage, tiles and parts of Mosaic, sub-site specific local registries, folderish types, a shop system and 3rd party service integrations including a custom SAML2 PAS plugin. Plone 5 helped us a lot to meet and exceed the requirements. But due to the early adoption, we had to face a lot of problems and find solutions which we brought back upstream. The end result is again a Plone-the-framework as we need it.
10.5446/55283 (DOI)
Good afternoon, everybody. We are ready to start. Theoretically I have 40 minutes, and I have 39 slides, so I will do no demos and I will probably go really quickly through each slide. Just pay attention. This is a semi-technical talk; it's not really a technical talk. It's a little bit about process, and a discussion about a concept called developer onboarding. My name is David Bain, I'm from Jamaica, and as I said in the last presentation, I'm enjoying the weather here. I brought three jackets and I haven't had to wear any of them; they're just adding weight to my suitcase. So everybody is good and ready? Okay, let's go. From zero to Plone: developer onboarding, a discussion. We're going to talk about the pain, a little background, and then the goal of this whole developer onboarding thing. "Plone: it's easy to get started, a great system for junior developers," said no developer ever. The thing about this is that being approachable is a good thing, right? Part of the reason babies are born with high voices and no facial hair and so on is that they're approachable; you can hold them and you want to hold them. You don't want to be that baby. And sometimes, Plone feels like that baby. In fact, I've had conversations with persons about this. A student of mine said: I'm ready to do Plone, I've heard about it, I know you're into Plone, just tell me what to do and I'll go. We'll talk a little more about that. Some guy named Calvin said to me: we really want senior developers, we don't want to have to deal with people who are learning, you know, too much pain. And there's a lady who did a presentation at a PyCon, and I think also at a Ruby event; she's a software engineer from San Francisco, and she said: there are two ways to get great engineers. You can steal them, or you can make them. I'm hinting at some stuff here.
So what did I say to Jason, my student who wanted to learn Plone, where I had to show him the ugly baby and hope that after seeing the ugly baby he'd still want to use Plone? I said: think of it as learning a new operating system. It takes at least three months to become productive. It's more like operating a helicopter than a bicycle. These are things you say to people so that they understand there's a journey involved, and there's an ugly baby. I actually would like to not have to say some of this stuff. In fact, as far back as 2012, a Plone developer named Mikko Ohtamaa, if I pronounce his name correctly, made an assessment of the things we should be working on to improve Plone. You can take a little peek and decide how close we are to that at this point. So one of the questions I'm asking myself is: can we make getting started with Plone easier for new developers? It's not going to happen by accident, and if we can do it, it's going to add a lot of value for a lot of people, starting with the developers themselves who just want to get started and use it, because Plone does a lot of great things, but there's this barrier. And to be fair, some of the barriers have been getting lower, so I'm not pretending nothing has improved. So, a little background. This guy named David Kadavy, a blogger I saw online, said: if you're going to make something, become a connoisseur of that thing, because that's when you begin to understand how things affect people. If you're making cheese, you should understand what makes cheese taste good, and things like that. If you're doing developer onboarding, you should start to identify what things make it easier for developers and what things create barriers. That wasn't really my goal, but I had a conversation with a WordPress designer sometime in 2013, and she said to me: how do I theme a Plone site? And my thought was: that's easy, by then we had Diazo.
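For readers who haven't seen Diazo: it maps static theme HTML onto dynamic content using declarative rules. Here is a rough, standard-library-only sketch of what a single replace rule does. Real Diazo compiles a rules.xml file to XSLT; the selectors, markup, and rule shown in the comment are invented for illustration:

```python
import xml.etree.ElementTree as ET

theme = ET.fromstring(
    "<html><body><div id='content'>PLACEHOLDER</div></body></html>")
content = ET.fromstring(
    "<html><body><main><h1>News</h1></main></body></html>")

# Hypothetical rule: <replace css:theme="#content" css:content="main" />
target = theme.find(".//div[@id='content']")
source = content.find(".//main")
target.clear()                 # drop the placeholder text
target.set("id", "content")    # keep the theme node's id
target.append(source)          # splice the content node in

html = ET.tostring(theme, encoding="unicode")
print(html)
```

The designer writes plain HTML; the rules decide which dynamic pieces land where, which is why Diazo was pitched as the easy road for front-end people.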
So there's documentation all over the internet; just go and find it and then come back with a ready theme. It didn't quite work that way, but it was a nice thought. After I said that to her, I went and looked for the documentation, because I thought: I'll just curate all this great documentation, put it in one place, point her there, and she'll be off to the races theming. What I ended up doing was something more like this. I realized: okay, apparently there's no really good documentation on how to do Plone theming. And I thought: oh well, it's so easy, I'll just write up a document on how to do it. That document is still an incomplete work in progress, but it's what I sent to her. It was just my notes about Plone theming, public online for other people who wanted to learn. This was Plone 4 with Diazo. And thus began my journey, because I started to see the ugly baby everywhere. I started to realize: oh wow, it's not that easy to get a buildout working; it's not that easy to deploy. But I had been blind to that, because my background is Linux system administration, so Plone is easy for me. Compare that to, say, my brother, who did computer science (I did zoology) but doesn't do Linux and stuff like that, so he doesn't want to touch this. And so my goal started to become: how can I make it easier for persons working with Plone? Of course, there are lots of benefits to making it easier. If we start with the assumption that developers are probably the most expensive resource you work with in terms of getting stuff built, and that they're generally hard to replace, then you want to be able to utilize your developers more, not less. You don't want to spend lots of time onboarding. You get a better return on investment, happier, more productive team members, reduced turnover (I borrowed that from Kate Heddleston, the engineer lady), and you can actually build trust.
That last one is something found in the research, by Fagerholm et al. I'll say a little more about that as we continue. So here's an interesting aside. I searched for "plone newbie" on the internet, and a funny thing happened: all the top results were coming from my blog, which was interesting. So let's talk a little bit about good onboarding in general. Here is an example of typical employee onboarding, from MIT; we're very near there now. They have a whole suite of tools and kits just to help managers onboard staff: a buddy program, equipment and supplies, information, a new employee email announcement, a schedule for the employee's first day, and then the first week, the first month. None of this is done by accident; they walk people into their jobs, as opposed to what we tend to do, which is sink or swim: we'll just hire a smart developer, make them look at the code, and they'll figure it out at some point. Good onboarding is intentional, and it requires empathy. It requires understanding, to some extent, some of the pain that you might be blind to because you've been working with Plone for a decade. So, the goal; or rather, let's talk a little more about developer onboarding specifically. After 22 slides, it's about time to define developer onboarding. It's also known as organizational socialization: the idea of turning an outsider into an insider. Specifically in the context of developers, you want them to be effective with the stack they work with, and the best way to do that is to help them understand your culture, your tools, the required skills, and your processes. If you do that, you should be getting closer to success. And it's actually not as easy as I thought, because I really thought I would have everything in place maybe after three weeks of doing this; I've been doing it since 2013 and it still feels like an uphill battle.
By the way, here's a quick poll: how many people here feel like they're a Plone insider? Okay, does that match my graph there? This is a trick graph, because I can just tell you which color matches which. But generally what I've found is that even people who have been working with Plone for a good while aren't necessarily sure: am I a Plone insider yet? Do I know things well enough? So we really should have some goals and figure out what the checklist of things is that would make you a themer, an integrator, a developer, a core contributor, and then have a checklist of things to learn and tasks to do, things to practice. So let's talk about the research and the practice. Basically we're talking about what people have actually done in terms of researching developer onboarding. In preparing for this talk, I discovered that there is actually a gang of four for developer onboarding. Of course that's a little bit of an inside joke, so I only saw about five people chuckle; the original Gang of Four wrote the design patterns book for coding. Anyway, never mind. And I'm also going to talk about practice, which is people who are actually doing onboarding in the real world. Very briefly: the gang of four here is Fagerholm, Sánchez Guinea, Borenstein and Münch, and I'm sure I'm destroying their names. What they did was research on Facebook's Open Academy, where Facebook brought in the core developers from different open source projects, brought them to (I think it was) Stanford, and had them work with newbies to help onboard them with the software; the goal was to get the newcomers able to contribute to the software. Their study looked at how effective the onboarding was, what people did, and what would make it better.
One warning: it turns out that mentors become less valuable to the project while they're mentoring; they may commit a little less code and so on. That's something to bear in mind. But look at what happens to the mentored developers. The line at the top, the dashed blue one, is the developers that were mentored. As you see, they became more productive, and not only more productive, but more productive more quickly, whereas the non-mentored developers did move up, but were not as productive as quickly. I apologize, I don't remember the metrics they used to measure this, whether it was commits or that type of thing, but the general takeaway is that mentored developers do better than non-mentored developers. And they came up with recommendations. They suggested that core developers should participate in mentoring activity, and that there should be face-to-face events. Interestingly, Plone has a little bit of that: Google Summer of Code provides some options for mentorship, and face-to-face sprints do exist and do happen. They also warned that the effect of onboarding is not always immediately visible, so you need to measure and stay engaged with the newcomers so that you can see how they are progressing. And what about the practice, what are the people involved in this type of thing saying? Well, we already said it produces productive developers faster. And it's important that when someone is being onboarded, they understand the model properly. If you don't understand the model properly, you can do all sorts of strange things. In Plone that's even more important, because we're getting people coming from PHP, where what is in the file system basically reflects what comes up on the website, whereas we have things like traversal and so on.
We're talking about people coming from SQL databases who now have to understand the model of an object database, and things like that. In terms of the how of developer onboarding, the general practice often includes some type of checklist. In fact, the Drupal project has something called the Drupal Ladder; I don't know how active it is now, but what they do is create checklists for the different things you need to get through, and I think at one point they even organized meetups where people could work through the different ladders. I started something like that at one point; I can't even remember the name of it now. The idea was a kind of dojo approach, where you had some Plone exercises: today we're going to go through installing and setting up buildout; today we're going to go through setting up Dexterity; here are the steps. And then people would come up and demonstrate it, almost the way you would repeat forms in a dojo. That was an idea I had, but I only have so many hours in the day. Another common thing, which I picked up from somebody on a Ruby blog, is: how can you simplify the setup? Once you get to the checklist, sometimes some of the things in it can be distilled into automated steps, and if you can do that, you can go faster, and a new person needs to know less to get going. Lego blocks are useful too. For example, we have the Diazo snippets library: yes, you know Diazo, but every time you want to do something you have to think about it. What if we can give you a couple of snippets that do the common things? And the typical development cycle for a developer is: they have to get their development environment going, then they customize code, test it, and then deploy to production. This is a very simplified cycle, so please don't cringe. Developers are users too, and this is why developer onboarding is important.
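The ladder/checklist idea above can be made concrete with a tiny sketch. This is purely illustrative; the role names, steps, and `progress` function are made up, and no such Plone tool is implied:

```python
# Invented ladder data: each role has an ordered list of skills, and
# progress toward "insider" is just the fraction completed.
LADDERS = {
    "themer": ["install Plone", "create a theme package", "write Diazo rules"],
    "developer": ["install Plone", "buildout basics", "Dexterity types",
                  "browser views", "write tests"],
}

def progress(role, done):
    """Fraction of a role's ladder that has been completed."""
    steps = LADDERS[role]
    return sum(step in done for step in steps) / len(steps)

print(progress("themer", {"install Plone", "write Diazo rules"}))
```

Even something this simple gives a newcomer and a mentor a shared, measurable answer to "am I an insider yet?".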
There is some more discussion by someone involved in onboarding in general. In general, people don't want to think; even developers don't want to think when they want to get started. They don't want to think about whether this buildout needs XYZ or ZKH or whatever; they just want to get started and get to a hello world. If your hello world takes three hours, you may not be encouraged to continue, and that might be considered a problem. So the whole theory around this is: require the least amount of work possible. Defaults should let people do less work and get more done. Present features real people want, and those features happen because you do real user research, which means you need to find out what they really want. And examples are always better than descriptions: it's always better to have a simple example of how something works than "well, Diazo is this thing that is used for theming, and so on and so on", ten minutes later. You need examples. And there's this concept called progressive disclosure: just show people what they need, and if they need more, there will be options. I think some of these concepts can be applied to developers. Since descriptions aren't enough, I'll soon get to an example of progressive disclosure in another context. So here's a quick quiz: which part of the development cycle adds the most value? This is partially my opinion. I should have labeled them A, B, C. How many people say the first one? Okay. How many people say the second one? It would also be interesting to know the background of these people. How many people say the third one? Okay. I think we got the most hands for the second one. And I kind of feel that the second one gives you more value, because most of the time setting up a development environment can be distilled into an automated process.
If you can put the right scripts in place and know the target environment, you should be able to press a button and be ready to write some code. Writing the code is where value is added, because once you start to write code, you can bring value to your customer: your customer can send an email, your customer can track information on their website. Deploying to production obviously adds value too, but if you're deploying code that doesn't do anything, then not really. So if you can automate the deploy to production, that's great. Then imagine a situation where you could get up and running and deployed in less than two hours, as someone who knows nothing about Plone. What would that look like? Well, here's one possible example of success; I call it Sandy's first day. Sandy is going to work at a Plone company, and she has never heard of Plone before in her life, but David from Alteroo says Plone is the greatest thing ever. So the week before, she frantically decides she's going to find out about Plone. After having breakfast, at 10 o'clock in the morning she Googles for Plone, and at 10:01 she finds a Plone new-developer guide. She happens to be running a Linux box, so that's convenient. She runs pip install, then runs a little recipe, and launches her Plone site at ten past ten. At 10:40 she sets up a theme from a repository of sample themes, and she customizes some content types with Dexterity. She customizes the theme a little more, because she read about the Gloss rules that are available. And by 11:15, less than two hours in, she runs plock deploy, points it at a DigitalOcean server, and she has a live site. Now, that would be great. So here are some of the tools I've picked for making that happen. Plock, because it is really, really, really, did I mention, really easy to get started.
There are some gaps in documentation and there are some non-standard things. But those could be changed. Mr. Bob allows you to create some nice templates. So if she was working with a particular company, you could create templates to get started. What would be great is some type of marriage between Plock, Mr. Bob and those Ansible roles, because then you'd be able to do that plock deploy thing, which doesn't really exist yet. There are resources now like training.plone.org, so there would be a quick-start tutorial that she would have found. And Gloss is like a layer on top of Diazo, which is like what Bootstrap is to layouts: use CSS classes instead of Diazo rules to do your layout. And if you actually need to dig into Diazo, there's the Diazo snippets library. So after a day she should be able to do quite a bit. The whole idea behind this, one of the things I think about when I'm thinking about this stuff, is: what's the least amount of vocabulary needed to get stuff done? In other words, do they have to know the words Diazo, Mosaic, buildout, pip, and then study to be able to pass an exam and know all these things, or can they just learn a few commands, use things they already know, and be productive? So in conclusion, I don't know if you know about this guy. This is a good example of getting the model wrong. So apparently this guy didn't realize that the war was over. He didn't realize for years, I think it could have been 10, 15 years, so he was in the jungles in the Philippines and he actually killed some people who came into the jungle. When they dropped flyers to tell him that the war was over, he thought it was a ploy by the enemy, and he subsisted on coconuts and stuff like that all those years. His commanders gave him an instruction that he should not surrender and he should not die by his own hand, and he faithfully followed all of that. Eventually they managed to convince him that the war was over, without him killing them, and he got out.
He returned to Japan and he started... I don't want to give the details wrong, but the summary is: when you have the model wrong, when someone isn't properly onboarded, we can lose a lot of time, because the way that they think about the problems is wrong. Somebody who's coming to this starts to say, well, is the SQL server running? What version of PHP are you running? You can literally have a developer working for three months and not be productive, because they haven't gotten the benefit of understanding the pieces. So I think in conclusion it's important that one of our investments in developer onboarding is getting the model right. These are some references, so once this talk is available you can get the references. And thank you. Are there any questions? Comments, thoughts? Now that I understand Rapido, I think it's great. And it's actually easy. I was thinking, in fact, there's somebody I know who is coming from PHP, and they found their way to Web2py and they kind of like it and they've been somewhat productive in it, and I started thinking, well, that might be a faster way to get them productive in Plone with something like Rapido. So it's something that I'm actually looking into. I want to explore how far you can go with it before you have to start doing browser views and other things, but I like it. Exactly. Exactly. Exactly. And so I have started to use Rapido, nothing big, I just needed to put announcements in the head of a site. So we just use Rapido with a Diazo rule to inject an announcement onto our page, and it means in terms of maintenance I can quickly show somebody and say, there in the theme you can change this and it will change the announcement; if you want to switch off the announcement, just edit the HTML, and it's fairly maintainable. So I like that. Yeah. Okay, that means everybody is ready to go to the next talk. I have an idea of a story for... Sandy's first day.
Yeah. Right. Yeah. Yeah. There's a threshold. Right. Extremely complex and changing also. I know. I know. Do you have any ideas how to make onboarding work? Do you think onboarding should always go that way, that you give the feeling of success to people by letting them do stuff through the web, like you suggested, and then telling them, okay, if you want to go a step further you have to learn about the HTML? Yeah. Yeah. I think there's value in pain. So let them enjoy the through-the-web approach and get as far as they can with it, and then just enough pain with enough of a goal: if they can get past that, then they'll be fine with the rest of it. But if there's zero success after the first two weeks... I had a conversation recently with someone who's a .NET developer, and up to that time I was saying, yeah, productive in Plone is about three months, and you just accept that and you invest the time and get someone up to speed. So I asked him, how much time does it take for your guys to get going? And he's like, what? They have to be ready on the first day, you know. Well, yeah. That's Jamaican. So then I started thinking, whoa, these guys expect somebody who doesn't even know the stack to be able to do something useful on the first day, and I'm telling people that we're spending money on interns who won't be productive for the first three months. And I had to step back and think, okay, maybe I need to review, maybe Rapido is a good thing, and yeah, so that's part of my perspective. Okay, thank you again.
The heart of this talk is about making it easier for new developers to get started with Plone. For more than 3 years David has been conducting ad-hoc experiments related to developer onboarding. During that period he explored ideas such as Plone Drills, a Diazo Snippets Library and Chrome Plugin, cloud based installations of Plone development environments, and a Plone Newbie Developer Toolkit (Plone 4 only); he contributed to Alex Clark's Plock Plone Installer project; and he spent a lot of time on a theming approach for Plone which "extends" Diazo called Gloss. He also created a short video series related to using Gloss with Webflow. This talk aims to share David's explorations and perhaps make some recommendations on a way forward for improving the developer onboarding experience.
10.5446/55284 (DOI)
Alright, so I am here to talk to you today about getting things done using one of Plone's what I would consider lesser known but more useful features, content rules. When we talk about content rules, we need to ask ourselves first what is a content rule. To answer that question, I would like to talk first about what is content management because content rules are really tightly tied to this idea of managing your content over time. We have content in websites. Maybe we have pages that people have created that they want to have put out there into the world or maybe we have event listings for conferences that people might want to attend where they need to know that there is going to be important talks about fuzzy puppies and content rules and things like that. Maybe we have breaking news that we want to expose to the outside world, something really important that is happening or perhaps we have products that we are trying to sell to somebody that we want them to be able to get access to. In addition to all of this content that we have created, we also have users who come and interact with our website. So we have folks who are users of the website, we have folks who are administrators of the website, we may have folks who are members of the website and are allowed to create or manage content over time by publishing or editing or doing other things like that. Our users can interact with each other in social ways using the tools that we build for them into our website or they can take actions like publishing content out to the world at large. Really it's all about them creating things that they want to share with people or editing things that they've created, publishing the stuff that they've created so that the general world can see it or a controlled audience can interact with it. 
Moving those things around to reorganize them over time, and even deleting content when it's no longer of any use to them, in order to clean up the material that you're showing to people and make sure that you're not disseminating information that's no longer valid. All of these actions that users take are in a sense events, things that happen in your website. Maybe we create a new user, or a user logs in, or somebody creates a new page, or somebody decides to publish something. All of these things are events, and we can take events and tie them to actions that our website takes automatically. This is what the idea of content rules is: events that your users, the people that are interacting with your website, are triggering on their own, which then cause other actions within the website to take place automatically. We call these content rules, and it's a little bit difficult to kind of explain them without actually just giving you a demonstration of what a content rule is and how you might use them. What I'd like to do for the next couple of minutes here is talk about fuzzy puppies. This is the fuzzy puppies website. It's a very friendly website. You can see there's all sorts of nice things here, upcoming events that somebody might want to get involved in, lots of beautiful content that we're sharing out to the world about our puppies and how fuzzy and cute and happy they are. As we're managing our content, maybe we decide that the fuzzy puppy show is no longer applicable. Maybe it got canceled. The fuzzy puppy headquarters got burned to the ground or something like that. We are no longer able to actually hold our show there. We want to make that content disappear. Maybe we also want to hold on to that content because we don't want necessarily to delete something. Maybe we want to keep a historical archive of the materials that we've been working with over time. We can log into our website as the manager of fuzzypuppies.com here. We're going to log right in.
Once we're logged in as the manager of the website here, we can go over and we can click on our login, our personal tools button, go to the control panel at the back, and we find ourselves the content rules control panel. Here we're going to be able to set ourselves up actions that we want to take place. But let's go back home to fuzzypuppies for just a moment here. What we'll show you is the contents of the fuzzypuppies website. We also have, in addition to the content that we're doing, a little folder here that we're going to call obsolete items. This is a place we want to put our archived content, things that are no longer relevant to the outside world, but that we might want to hang on to in an archive kind of style. Again, we'll go back to our site setup as the manager or the site administrator of the website. We'll get a chance to go here to our content rules tab. What we're going to do is we're going to add a content rule. Let's talk for a moment about what our goal here is. What we'd like to be able to do is to allow our editors, when they make a particular workflow transition, maybe they want to mark a piece of content as obsolete, they should be able to make a transition in workflow that says this is obsolete. When they have done so, the content will disappear from where it is, will get moved to the archive folder. It will get changed to be private so that it isn't visible to the outside world anymore, but we still have a record of the fact that that thing existed at some point in time. We're going to add a content rule in order to accomplish this. Into our rule, we'll say that maybe this is the archive content, content rule. We will say when a user makes content obsolete, it will be archived. What we need to do first is to pick the event that we want our users to take that is going to result in the action that we want to have happen. For us, what we would like to do is to listen for instances of the workflow state changing. 
There are a number of different actions up here. We'll see what some of the other ones are, but for us, right now, we want to think about workflow state changing. We set up our rule to happen whenever a workflow state has changed, and we'll say that our rule should be enabled, and we'll save that rule. This is not yet a fully functional content rule. What we've done is we've said that whenever a workflow state changes in our website, we want something to happen, but we haven't said what we want to have happen. We also haven't yet really narrowed it down. We don't want all workflow state changes to result in this, only particular ones. What we're going to do is we're going to add a condition to our content rule. The condition that we're going to do is to listen for a particular workflow transaction or transition. Whenever a particular transition is made, we will take an action in response to that. We're going to add a condition for workflow transition. What we're going to do is we're going to listen for the make content obsolete transition. This is a lightly customized copy of Plone's own built-in workflow. I just added one workflow state transition here called make obsolete. We're going to listen for that. That will be the condition that determines whether or not our content rule is applied. Next we need to take an action. What we want to do is over here in add action, we would like to move our piece of content to a particular folder. We'll say add for that. Then we're given the opportunity here to go searching for a place where we want our stuff to go. We would like it to go into the obsolete items folder. We'll click on that as our location and we'll say save. The other thing we want to make sure of is that once we have actually moved this thing, it's no longer published so that it isn't visible despite the fact that we have moved it to somewhere else. We are also going to transition workflow state on this object and take two actions. We'll say add. 
We come here and we say we would like to make this piece of content private or hide it and we can save it. Now if we're logged into our website as just a typical user or an editor, somebody who has the ability to manage content over time, our user can come back to the homepage here and they can say, oh, you know, the fuzzy puppy show got canceled. The fuzzy puppy lodge burned to the ground. It was a horrible, horrible event. The show isn't going to go on. The puppies are very sad and so are we. What we need to do is come to our workflow here and say we would like to make this obsolete. When we do so, it has now changed its workflow. Whoops, I forgot to do one more thing. You notice it didn't move, right? The reason it didn't move is because although I have created this content rule, I have not applied the content rule somewhere. It is active but I must tell Plone where this content rule should take place. I will go back to my home folder and I will come down here to the rules button in our interface and we will say we would like to have the archive content rule be active at the root of the site. We're going to add that and by default the content rule will apply not only to this folder but to any subfolders that are inside it. Throughout our entire site, we can also disable that so that it only happens within a particular location. We can also temporarily disable the rule in this particular location but for the time being we're just going to leave it exactly like it is. Now we'll go back to our Fuzzy Puppy Show. This time we'll say for real we would like to make our content obsolete and when we do so, you'll see that now we are looking at the home tab and that's because if we check our contents, the Fuzzy Puppy Show is actually now inside obsolete items. It has automatically been transitioned. You'll notice it is also private whereas before it had been published. This is a useful case. 
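The archive rule just assembled follows a simple pattern: an event fires, a condition filters it, and actions run. As a rough illustration of that shape (plain Python, not Plone's actual implementation; all names here are made up for the sketch):

```python
# Toy model of the content-rule pattern: an event fires, a condition
# filters it, and actions run. Illustrative only -- not Plone internals.

class ContentRule:
    def __init__(self, condition, actions):
        self.condition = condition  # predicate over the event
        self.actions = actions      # callables run when the condition holds

    def handle(self, event):
        if self.condition(event):
            for action in self.actions:
                action(event)

# stand-ins for Plone folders and a content item
site = {"home": [], "obsolete-items": []}
item = {"title": "Fuzzy Puppy Show", "state": "published", "folder": "home"}
site["home"].append(item)

def move_to_archive(event):
    obj = event["object"]
    site[obj["folder"]].remove(obj)   # leave the current folder
    obj["folder"] = "obsolete-items"
    site["obsolete-items"].append(obj)

def make_private(event):
    event["object"]["state"] = "private"

archive_rule = ContentRule(
    condition=lambda e: e["transition"] == "make obsolete",
    actions=[move_to_archive, make_private],
)

# an editor makes the workflow transition; the rule reacts automatically
archive_rule.handle({"transition": "make obsolete", "object": item})
print(item["folder"], item["state"])  # obsolete-items private
```

In real Plone the events, workflow, and folders are of course full Plone objects and the rule is configured through the web; the point here is only the event → condition → action shape.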
In fact, I used this myself in a website that I used to run for the University of Washington. We had a department whose job it was to manage grants and grant applications and so on and so forth. The folder that they all did their work in was a folder that they had access to all the items but the grantees, the people who were actually writing out the grant proposals, were only supposed to have information about the documents that belonged to them. These documents were placed in a private state and then shared individually with the owners of the grants, the primary investigator. What happened quite often or at least once before we yelled at them and figured this out was that one of the persons who was managing the grant accidentally pushed the publish button and took all the information that was there about the finances of this particular grant and accidentally published it to all of the people in the department. It only lasted a few minutes before they figured out what they had done but it called into question whether or not we should really allow them to do things like that. It really caused a problem. This brings me to the second application that I'll have for content rules here which is that I don't know that you've noticed this but our Fuzzy Puppies website isn't really just about fuzzy puppies. It's actually also a front for the New World Order and for a secret plans for world domination. Because it's a front for this and there's all of this information in here about our secret plans for world domination and Skippy the squirrel who is our dear leader and so on and so forth, we don't want this material to be published. This is vital internal stuff that all of the core members of our site should be able to see but we don't want the world in general to be able to see this. 
So we would like to make it so that somebody can't accidentally go and press the publish button and suddenly turn all of our material that is about secret world domination public on our website so that anybody can find out that fuzzy puppies are in fact evil and seeking world domination. So we're going to go back again to our site control panel here and we'll go to the site setup and we're going to add a new content rule. And this content rule is going to have the purpose of making sure that nobody ever publishes anything in the location where it is applied. So we're going to add a content rule here and we'll make this the keep it secret, keep it safe rule. Don't let them find out about our plans. And again, this time what we're going to do is look for workflow states that have been changed and we'll have this rule enabled and we will say save and then we're going to say the condition for this is that we would like to look for the final workflow state to which we are going. And we'll add that and we'll say okay, anything that is publicly visible or visible to people in the outside world that might be a place that we don't want something to be seen, we're going to select all of those workflow states. So any end state that is not private, we would like to listen for events that move our content into those states. And what action do we want to take in response to this? Well, we would like to have a transition of workflow states from whatever it is that they set it to private. So we will go ahead and move it back to private immediately. So we'll save this content rule. This will keep all of our stuff safe. Nobody will be able to see our secret plans and we need to go over to the secret plans folder and to our rules tab. And then when we're on the rules tab, whoops, here we go, yeah, content rules, we are going to select here the keep it secret, keep it safe rule. 
And we'll say for this particular location, if anybody should attempt to publish something, that thing will automatically be transitioned back. But wait, that's not enough because I've got lots of users and I don't want to just prevent them from publishing things without also telling them the reason why. So I would like to actually go back to my site setup and add one more thing to my content rule here. I'm going to come down here and I'm going to say for my keep it secret, keep it safe, in addition to transitioning something back, I would like to notify the user who took this action. So I'm going to say I'd like to add something here and my message is going to be something like, hey, stop. Stop. You must not publish our secret plans. And we're going to make this a warning message because we want to be stern about this and make sure our users understand it. So we'll save that action as a second action to take here. Now, if we go back to our secret plans and somebody accidentally happens to click on this and say, well, I like this secret plan, I would like to take the secret plan for the New World Order and I would like to publish that. So I'm going to promote it to a draft and let the outside world see it. And what will happen as a result is that it will be transitioned back to private and I will get a warning that says, mm-mm, don't do that. If I could make like a little screen pop up with the little death's head and then no, no, I would do that, but I don't have that ability quite yet for my workflow rules. So content rules can be very useful in kind of dealing with the sort of human resources issues that come with a content management project. This is a solution to a problem that's a little bit less technical than hiring a developer to build for you, a custom workflow that prevents publication in this one particular location. 
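The guard rule can be pictured the same way: the condition looks at the end state of a workflow change, and the actions revert it and warn the user. A toy sketch in plain Python, not Plone internals; the state names and message text are just illustrative:

```python
# Toy sketch of the "keep it secret, keep it safe" rule -- illustrative
# plain Python, not Plone's code. The condition matches any visible END
# state; the actions transition the object back and notify the user.

VISIBLE_STATES = {"draft", "pending", "published"}  # anything not private
messages = []

def keep_it_secret(obj, new_state):
    obj["state"] = new_state              # the user's transition happens first
    if new_state in VISIBLE_STATES:       # condition: a visible end state
        obj["state"] = "private"          # action 1: transition back
        messages.append("Stop! You must not publish our secret plans.")  # action 2: warn

plan = {"title": "Plans for world domination", "state": "private"}
keep_it_secret(plan, "published")
print(plan["state"])   # private
print(messages[0])     # Stop! You must not publish our secret plans.
```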
You can use content rules as a site administrator or as a manager to cover your bases and to try to conquer some of those kind of human management problems that come along with any content managed website. Returning quickly to the presentation here, then I'd like to point out that on the Plone site there is really good documentation for content rules and how they work. I'd also like to point out that content rules themselves are actually extensible. The built-in action triggers that are present in Plone itself out of the box are numerous and plentiful, but there's lots of other things that you might want to listen for. And there are add-ons that are available that will extend both the actions you can listen for or the events you can listen for rather and the actions that you can take and the conditions that you might use to narrow down your content rules selection. So if you have any questions, I'd be happy to take them at this point. John. The second rule, does it write the database? Does it actually transition the object and then transition it back? It does actually transition the object and then transition it back. Yeah, it's a totally inefficient and awful way of solving this particular problem, but it really works. And when I ran into this problem in radiology, if I wanted to fix the problem by recreating workflow, which I could totally have done. I'm a Plone developer after all. It would have taken me a day or so, maybe two days to work out all of the issues, to get the workflow set up correctly. And this was a problem that needed fixing right now. And I went and set this up in our website and it took me all of three minutes to create the rule and to apply it to the folder where that rule was supposed to be applied. And that was it, right? Now the job is done. The rules are being applied, the place I want them to be. The problem is fixed. There's a band-aid on that bleeding wound. Now we can go and fix the problem in a better way, right? 
But if you don't have a developer at hand or you are not yourself a developer, these can be very decent ways of solving some of those problems that come up in managing your content. Yes? Can you give us some other examples where you would use this? So that's the biggest one that I've done publicly. The third rule that I was going to show was to set up one where when somebody logs into the puppy's website, I send an email to myself to let me know that somebody is logging in. And that way you can actually keep track of who's logging in and logging out of the website and when they do so. For example, some of the actions that are available to you when you're creating these rules, if we go back to the site setup here, some of the actions that are available to you to take in response to content rules include things like writing information to a log, right? So we can log when users are created or log when users log in, log when particular pieces of content are deleted. If you want to establish an audit trail for the people who screw up in your website and accidentally blow things up, this is a very useful thing to be able to do. I could imagine also potentially putting something on like the changing of a view, right? Or the display method for a particular folder to say whenever somebody comes in and changes the way that content in this area is displayed, let us know so that we can, you know, warn them sternly about doing that because as Franco was pointing out earlier, it's not infrequent that somebody will accidentally break a view by changing to something that they don't know how to get back to the correct view. Of course you haven't. Not you. Not you, of course. Charles on the other hand. When they crash the site because they change the view to full summary of all items and the folder has hundreds of items. There's another very good one. So there's definitely other uses that one could put this to. 
The number of actions that are available directly doesn't seem like a lot right off the bat, but there's a lot of things you can do with just these actions to be taken. Wayne. I have two other actions that I needed so I haven't brought into one as mail to group which is very useful. Absolutely. I hope you get that. There's one little bug in it so hopefully he'll fix it this Saturday. Nice. But that's really important. And another one is, oh, a reminder. So if you have an event and your event, you set it up for today and that's in a week, you can set it up to send an auto reminder a day, two days, three days, whatever before. Oh, that's really nice. Cool. So those rules are really important. I want to send you the links because I want to get them out there. I didn't get them to. No, terrific. We absolutely should get those out there and get them up and published. The add-on ecosystem for this is still developing for Plone 5. There are some of them that have been updated. They are all available for Plone 4, the ones that I was showing from the PIPI website. But there's a lot of good things that you can do with content rules. I find them to be, content management is more a human problem than it is a technical problem. And I, as a programmer, have a very hard time admitting that because I would like every problem to be solvable through technology. But that's not really the case. It's actually the case that most of the problems you face with these kinds of sites and these kinds of projects are human. And this is a human-facing, relatively straightforward way of solving some of those problems without having to engage the services of a programmer. Yes, Wayne? I was using something like FOSA in the community and they were saying it was not a good thing to do. So, the way it is now is to create, if you're outside of an event tab, if you're anywhere else, and if you create an event, it goes in that folder. 
Now, it'll show up, I think it shows up in the events listing because it's a collection. I'm anal and I want stuff to be in the right place. You'd like them to be created in the right place. If the refrigerator closes, I want to know the light's out. So I use the move-to-folder action. In other words, in this example, anywhere on the site, if you create a content type called event, it gets moved to that folder, because we can have a condition on the type of content that you've created. Use move to folder to just transport it magically to the right place. They were saying it's an issue, because with permissions you can create... if you create an event, it doesn't matter where you create it, because you have permission to create the event. Moving it, I don't know. I care about where things should go. Yeah, I mean, the typical Plone solution is simply to say that you cannot create events unless you're sitting in the events folder, by using restrictions. But that's the problem, right? Then you have people who come to the site and say, well, I can't create an event. Why is the site broken? And it's difficult to explain to them that it's not broken, it's just that they're using it wrong. Exactly. So I don't know. I mean, Stefan, you look a little ambivalent about the whole idea of moving content at creation time. So, well, it's tough, right? Because we have this entire permission inheritance going on. Yes. And it might do things that we did not intend this way. Yeah. So you do have to be careful about the way that you're rigging these things up. But I think if you are cautious and careful and think through the implications, it can be a solution. Yeah. Joan. I don't know what it's called in Plone 5, but there's the one where you can create any form that you like, PloneFormGen. Yeah. And when you submit the form, you can make it create some Plone content. Yes. The uwosh.pfg.d2c, I think it's called? Yeah. So then you've got... Really lovely name, that one.
But it's a great add-on. It can just trigger... Well, this is the other thing, right? I mean, there are certain situations that you can get into, especially if you have rules which trigger other rules, where you can end up in loops that don't terminate. So you have to be a little bit cautious about when you use, in particular, the buttons down here that say "stop evaluating content rules after this action" or "the actions executed by this rule can trigger other rules". Those are things that you may need to fiddle with in order to get the site to behave the way that you want it to. What does the first one mean, and the second one? The first one means that when this rule finishes, if there are any other rules that were scheduled to go off, don't do them. This is it, right? This is the rule to end all rules: once it's finished, processing of other rules stops. So does that mean, like in a folder, if you had multiple content rules, if this was the first one, it would prevent the other things? Yes. Yes. I think that's about it. That's all we have time for; we went a little bit longer than I was supposed to. I apologize for that. But thank you very much for coming, and I hope that you find useful applications for content rules in your Plone website. Thank you. Thank you. Thank you.
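The two checkboxes discussed in the closing Q&A, stopping rule evaluation after an action versus letting actions trigger further rules, boil down to ordered dispatch with a stop flag. A rough plain-Python sketch (illustrative names, not Plone's code):

```python
# Sketch of "stop evaluating content rules after this action": rules
# assigned to a location run in order, and a rule flagged stop=True keeps
# any later rule from firing for the same event. Illustrative only.

def run_rules(rules, event):
    fired = []
    for name, matches, stop in rules:
        if matches(event):
            fired.append(name)
            if stop:
                break  # later rules for this event are skipped
    return fired

rules = [
    ("log-change",   lambda e: True,                         False),
    ("notify-admin", lambda e: e["transition"] == "publish", True),   # stops here
    ("move-content", lambda e: True,                         False),  # skipped on publish
]

print(run_rules(rules, {"transition": "publish"}))  # ['log-change', 'notify-admin']
print(run_rules(rules, {"transition": "retract"}))  # ['log-change', 'move-content']
```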
A demo of Plone's through-the-web tool for making things happen based on content actions.
10.5446/55288 (DOI)
This talk is about plone.api, a simple API for Plone. So that's me with the coconut. I'm on GitHub or Twitter. I'm Gil Forcada. I work at der Freitag, a German weekly newspaper. And I've been around in the Plone community for a while already, six, seven years. And I'm part of the testing team. Before diving into what plone.api actually is, just a quick survey to see what people already know. So who knows plone.api, or what plone.api is? Okay. So who uses it already? Okay, perfect. So plone.api is a Python distribution for Plone, of course, that tries to provide a simple API to interface with Plone, to cover roughly the usual 20% of the tasks that any Plone developer uses 80% of the time. So like the regular bulk of interactions with Plone that you need, most of them are covered here in a nice API. Actually, I haven't done much of it myself, I'm just talking about it and cheerleading it, because it's a great package. It was done during two sprints some years ago; if you happen to meet Zupo, just congratulate him, because that was an amazing idea that they came up with. All the people involved on those two sprints were the main ones behind it. So why did they do it to begin with? You have seen that Plone already celebrates its 15th anniversary this year. And the application server that Plone uses, Zope, is even older. So of course, 15 years ago there was a cool way to do things. Three years later, there was a new way. Five years later, yet another way. And you end up with 15 ways to get the exact same results, but in different parts. So for anyone that's new, that's a bit scary or uncertain: okay, I'm trying to get a user this way, but now I'm checking this other add-on and it's doing it in a different way. Is my code right? Is that wrong?
Is there any performance benefit of one over the other? Why are there so many ways to do things? On top of that, there are always inconsistencies: some APIs use username while others use user ID in their method calls. And in the end, Plone is almost 300 packages all in all. So if you want to really start learning, where do you start? What's the basic functionality that I can get from Plone? How do you bootstrap your teams to get productive? Oh, you're here. So here's the one — you should be doing the talk, actually. So that was basically the main point: as I was saying, how do you actually get the Plone site root? That always happened to me. I never, ever remember what the actual import is that I have to do just to call the method that gets you the Plone site root. I never learned it by heart. Or checking permissions: whether you are checking a permission on an object, or just the permission itself, or for a different user — all of this, at least for me, is just impossible to remember, all the APIs I would need to use. So fortunately, plone.api comes to the rescue. As I was saying before, the goal so far has been to deliberately not cover everything, because then you would just be building yet another layer on top of Plone, and that's not the idea. The idea of plone.api is not to be your one-stop shop for everything. Not everything will be there, because there's really no need, and the API would anyway become so bloated that you would completely miss the point of making it approachable and usable for newcomers. But the core features that plone.api provides are really the ones that you use on a daily basis and that are real time savers. And for that, it provides clear API methods, as you will see shortly.
Everything is in one place, so you don't have to remember which of those 300 packages to look in — okay, it's permissions, so that's probably access control, but well, maybe the permission is somewhere else, and so on. Not everything lives in a single place. As a previous speaker was showing us, it's really important when you are looking at something to see where the code actually is, where the implementation is. Keeping everything in the same package makes it really easy to just browse the few Python files and all the methods and see what's actually there, what functionality it brings. And especially for Plone that's rather unusual, but it's a really nice API — just like, for example, if anyone has used the requests library for HTTP, which is so nice compared to having to use urllib or urllib2 — I mean, they even made three versions of urllib, so that speaks for itself. And plone.api is probably the first and only API in Plone so far that's really Pythonic, that really feels comfortable to type. When you look at the code and see some plone.api code there, it basically speaks for itself. You don't really need to put in any comments; you don't need to dig into the methods to learn what they're actually doing, because everything is right there. But how did they get there? How did they come up with not only plone.api, but with what needed to be in it and how to do it? They started with something probably crucial for the plone.api project: they didn't start by writing a single line of code. Instead they did documentation-driven development. First of all, they said: well, we are using Plone, we know it's complex and everyone is telling us that, so what do we want to get? What do we want to document? Let's come up with some descriptions, some ideas of what we would like to have.
So: we want to be able to get users. We want to get the portal root. We want to manage content. Those would be the building blocks. They created the documentation first, with all the functionality that they would like to have, without writing a single line of code — just dreaming, thinking out loud, actually writing down what they wanted to do. And only once they were happy with that did they start documentation-driven development and test-driven development, so that everything was tested. Actually, the documentation of plone.api is itself testable: if you modify the documentation and make a mistake, tests will fail, so you will know that you made the mistake in the documentation itself. This path — first the documentation, then the tests together with the code — really set the course for a good project, a good code base and a good organization of the code. There's also what you could call style-driven development, because this was a package made by the community, not a code drop from a company saying, hey, we came up with this, there you go. And the community is someone from here in Boston, somebody from New Zealand, somebody from India, so you need to set some style guides. They came up with a code style guide, a release process, a how-to-contribute guide, and all of that is in their documentation. They even went a step further and pushed all of that upstream to the community: what started as the code style guide for plone.api is now Plone's style guide. Of course, not all the code has been changed; there's an ongoing effort on that, and you are most welcome to join it. It can feel like that's not so relevant — in the end you just want the functionality to work.
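The idea of testable documentation — examples in the docs that fail when the documented behavior drifts — can be sketched with Python's standard doctest module. This is a minimal illustration of the mechanism, not plone.api's actual test setup; `slugify` and its docs are made up for the example.

```python
import doctest

# Documentation whose examples double as tests: if the documented
# behaviour drifts from the code, running the examples fails.
DOCS = """
The ``slugify`` helper turns a title into a URL-friendly id:

    >>> slugify("Hello World")
    'hello-world'
    >>> slugify("  Plone  API  ")
    'plone-api'
"""

def slugify(title):
    """Lower-case a title and join its words with dashes."""
    return "-".join(title.lower().split())

# Parse the examples embedded in the documentation and run them.
parser = doctest.DocTestParser()
suite = parser.get_doctest(DOCS, {"slugify": slugify}, "docs", None, 0)
result = doctest.DocTestRunner(verbose=False).run(suite)
# result.failed is 0 and result.attempted is 2 when docs and code agree
```

If someone edits the documented output without updating the code (or vice versa), `result.failed` becomes non-zero — exactly the safety net described above.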
But setting a style guide means that whenever somebody comes to you with their laptop — oh, I have a problem — and you see the code, if the patterns are recognizable, everything is far easier. Just like it's probably easier for most of you to understand any Latin-script language rather than Chinese, because you see the patterns, the spaces, the punctuation, the breaking points. Chinese and Japanese don't put spaces between words, so a language whose patterns you already know is much easier to follow. If you keep a style guide, enforce it and follow it, then all the code will feel the same. All the code will be far easier to change, far easier to adapt, far easier to read — which is, in the end, what we spend most of our time on. We spend far more time reading code than writing it. So, some quick examples, and they're easy to remember. I mean, I don't even need to ask you what this one does — it's already telling you: it's the API call that gets you the Plone site portal. That's it. It's easy to remember, and wherever you spot it inside any code, you know what it's doing, instead of all those obscure imports and so on. If you want to see whether a given user can view the portal, that's all you have to do: check the permission on the user. And here, for example, instead of creating a variable to hold the portal and then passing it to the call, you can just inline it — it feels natural to use that call in place, because getting the portal is no longer so complex that embedding it inside other function calls makes things bloated or intricate.
Something I love from plone.api is the copy one, because if you actually look at the implementation behind copying content, it's just madness. You have to copy the object, get the clipboard, and then paste that clipboard somewhere. It's so verbose that you miss the point that you are just copying — and that's the key here. With plone.api it's a single method call that does the copying. You don't have to carry the clipboard around and pass it along; it's much simpler, it gets the job done, and you can deal with the things that actually matter. But it's not just that with plone.api everything is good and everything is nice — when you start with good things, new good things follow. I'm not sure if any of you have been looking at the pull requests sent to plone.api — no, sorry, that comes later. So: good things that come with plone.api. At some point I started moving all my code to use plone.api wherever I could, and that's why I wrote this flake8-plone-api plugin. If you use flake8, or plone.recipe.codeanalysis or another package that runs flake8, it will give you hints that you are using a method call somewhere that could be changed to plone.api — reducing the number of packages that you depend on and making your code far more readable. I was doing that at work, every now and then spending some minutes or hours changing things, and looking at the code before and after is just so refreshing, so much cleaner.
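The "one clear call that hides the clipboard dance" idea is a classic facade. Here's a pure-Python analogy — not the real Plone or plone.api code, all names hypothetical — showing how a single `copy()` call can wrap a verbose clipboard-style protocol:

```python
class Folder:
    """A toy container (hypothetical, not a Plone class)."""

    def __init__(self, name):
        self.name = name
        self.items = {}

    # The verbose low-level protocol: copy to a clipboard object,
    # then paste that clipboard into the target container.
    def copy_to_clipboard(self, item_id):
        return {"op": "copy", "data": (item_id, self.items[item_id])}

    def paste(self, clipboard):
        item_id, value = clipboard["data"]
        self.items[item_id] = value
        return value

def copy(source, item_id, target):
    """One clear call that hides the whole clipboard dance."""
    return target.paste(source.copy_to_clipboard(item_id))

src, dst = Folder("src"), Folder("dst")
src.items["doc"] = "My Document"
copy(src, "doc", dst)  # the caller never sees the clipboard
```

The caller's code now says exactly what it means — "copy this from here to there" — which is the readability win the talk describes.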
And especially, you don't have to look for all of that yourself; you rely on flake8 to report it, so you know exactly what has to be changed, or where to go to change things. And then, as you use plone.api and the API is so nice, when you have your spaghetti code lying there you start seeing those little gems here and there, and it empowers you — it makes you feel bad for your own code, so you want a saner, nicer API, just like plone.api. It's not just about replacing calls; it's about actually making your own code better. And if you ever need a reference for how to do things, plone.api is the perfect package to look at: all the best practices are encoded there. And that's what I got ahead of myself about before: these are pull request numbers, done by completely different people — not by me, and certainly not by the creator of plone.api. That's the great thing. I just grabbed the last few, but all of them change the same four files: they add a changelog entry, they modify and expand the documentation explaining what the new functionality is about, they add tests, and finally, of course, they add the implementation. If you look at the average pull request in Plone, people do add the changelog — because now we have something to enforce it — and of course the code, but nearly nobody adds documentation, and most, but not all, people add tests. The nice thing with plone.api is that you see that everything is there. The docstrings are there.
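A flake8-style hint of the kind the plugin gives can be sketched with the standard ast module. This is a hypothetical illustration of the technique, not the actual flake8-plone-api code: it walks the syntax tree and flags calls to a legacy helper (`getToolByName` is a real CMF API that plone.api's `portal.get_tool` covers).

```python
import ast

# Legacy calls we want to flag, with a replacement hint.
LEGACY_CALLS = {
    "getToolByName": "consider plone.api.portal.get_tool",
}

def check(source):
    """Yield (line_number, hint) pairs for legacy calls in source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            hint = LEGACY_CALLS.get(node.func.id)
            if hint:
                findings.append((node.lineno, hint))
    return findings

sample = "catalog = getToolByName(context, 'portal_catalog')\n"
findings = check(sample)  # one finding on line 1
```

A real flake8 plugin wraps this same AST walk in flake8's plugin interface and reports error codes instead of plain tuples, but the detection logic is the same idea.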
All the surrounding code is done in a way that makes you feel obligated, or feel the need, to keep it as nice and clean as it is. Because it was done nicely and consistently, following a specific style guide, you see that all the code — written by completely different people, at different times over several years — still follows the same rules. Everyone understands the way plone.api wants things done, so the quality standard stays as high as it is. It's not that it started out really nice because the creator made it so, and then people stopped caring and just threw their patches in and ran for it because they had no more time. Instead, people do take the time. And you actually feel you can point it out to them: hey, you forgot the documentation, please add it, because everyone else did. That way, for the next person who comes along, the documentation is up to date, the test coverage is there, and the functionality is there, with the docstrings and the changelog and everything. So, plone.api: it's released on PyPI, of course, and the code is on GitHub. I was wondering whether or not to put all the available API methods on the slides, but the documentation is really nice and it's already integrated into docs.plone.org — just go there; it's linked from PyPI so you can easily find it. It's tested on every change on Travis, and it also checks the coverage on Coveralls.io, so we can see whether coverage drops or not.
Again: best practices, best practices and more best practices. For the future — for example, there's one thing I've been missing a lot lately: I'm always trying to get the portal URL. So if anyone feels like adding a way to get the portal URL, why not? That feels like a standard 20/80-rule candidate. Also upgrade steps — I'm obsessed with making upgrade steps for everything, so maybe that's more for me than for anyone else. Basically, plone.api is now at version 1.5, so it has already had five minor releases after the first one; it's basically rock solid and stable. There are of course a few bugs here and there, and new API proposals on GitHub. So yes — if you ever want to start looking at any Plone package, please let it be plone.api. You can see all the best practices there, and then go to the dark corners of Plone, but at least you get a bit of peace of mind that there's still some sane code inside Plone. If anyone is up for sprinting, I will probably be doing something testing-related, but I will be glad to help anyone with this. And I'm done. Thanks for listening, and I think we have time for questions. Audience question (partly inaudible): when I need to modify some core Plone behavior, I'd like to use plone.api — is there a plan for Plone core itself to use it? Answer: that's like the ongoing Plone 2020 discussion about the future and so on; more or less the same thing, on a smaller scale, also happened with plone.api.
Using plone.api in core means that plone.api should not use core, because then you have circular dependencies. So yes, I would totally love to be able to run this flake8-plone-api everywhere and just replace everything, because it would be so much more readable. But that means we would either have to take all the dependencies that plone.api relies on and put everything in there — a massive amount — or create a module like "don't look here" and put all the dependencies there, and that's probably not sustainable in the long run. But definitely, I see your point and I share your desire. The technical part is a bit more complex, but of course you can bring it up on community.plone.org anytime and see how the discussion goes. It would be really, really nice. Question: what's the current state of the issue tracker — are there a lot of issues open and pull requests waiting, if you know off the top of your head? Open pull requests, I think there are one or two, and as for issues there's a handful of them, sure. Some are like, let's add this new thing, but nobody has stepped up to do it. And there are a few actual bugs indeed. And there's the still ongoing discussion about user ID versus username, which will probably require a 2.0 version just to make sure that we don't break semantic versioning — again something introduced by plone.api as a best practice: use semantic versioning. But activity is quite low, so I guess everyone is using it and being happy with it. There were ideas of adding whole new modules and so on, but now maybe things are moving more towards the REST API, I think. Personally, there's this get-portal-URL one I was mentioning before — that's probably one of the few things I would like to have there for myself, but I don't find myself wanting much more so far; I'm not sure whether others using it do.
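The semantic-versioning rule invoked here — a breaking change like the username/user ID rename requires a major version bump — can be sketched in a few lines. This is a hypothetical helper for illustration, not anything from plone.api:

```python
def bump(version, change):
    """Return the next version for a given kind of change,
    following the semantic-versioning convention:
    breaking -> MAJOR, new feature -> MINOR, bug fix -> PATCH."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "breaking":
        return f"{major + 1}.0.0"
    if change == "feature":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

# A backwards-incompatible keyword-argument rename on a 1.5.0 release
# would therefore land in 2.0.0, as the speaker says.
next_version = bump("1.5.0", "breaking")  # -> "2.0.0"
```

That's why the username/user ID cleanup can't go into another 1.x release: callers relying on the old keyword names would silently break.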
But of course, just create an issue on GitHub for plone.api; we can discuss it and add the functionality if needed. More questions? Question: you always have to consider through-the-web development. plone.api is actually really helpful, and when you are doing through-the-web work you would like to benefit from it. Do you consider it a security issue? Given that anything through the web is going to have the security machinery and permission checks applied anyway, do you think it is a security issue or not? Answer: to use plone.api through the web, you mean? Well, there are definitely a lot of methods in it that are totally safe, like getting the portal root, for example, or checking permissions. Creating content, maybe — well, in the end, even from RestrictedPython you will still get the security checks. I'm not on the security team, so I cannot really say, but I would say that if you know what you are doing, probably yes; making it completely open, I'm not so sure, especially given the issue with docstrings — everything that has a docstring becomes publishable through the web, and all of these methods do have docstrings. So that would probably be a concern for our great security team. But definitely at least a subset of it could be whitelisted. I have no idea how to whitelist things to be enabled through the web, but there's definitely material there that could be, and that would of course make code through the web, in Rapido or the like, much nicer, sure. Any more questions?
Gil will give us an overview of the API and the process used to build it. Plone was built on a multi-layered stack with its origins in 1998, which made for a messy amalgamation of components each with its own API. plone.api was created to hide all those details and present a nice, useful, easy and intuitive API on top of them.