Dataset Viewer
Auto-converted to Parquet
Columns: chapter_number (int64, values 1 to 25); title (string, 5 to 58 characters); text (string, 120 to 172k characters); source_file (string, 14 characters)
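If you want to work with these rows programmatically, the converted Parquet data can be loaded in a few lines. The snippet below is a minimal sketch using the Hugging Face datasets library; the repository id is a placeholder, not the dataset's real name.

from datasets import load_dataset

# Hypothetical repository id: substitute the actual dataset path shown on the Hub.
ds = load_dataset("your-username/ddia-chapters", split="train")

row = ds[0]
print(row["chapter_number"], row["title"], row["source_file"])
print(row["text"][:200])  # first 200 characters of the chapter text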
1
Designing Data-Intensive Applications
Designing Data-Intensive Applications The Big Ideas Behind Reliable, Scalable, and Maintainable Systems Martin Kleppmann
text00000.html
2
Designing Data-Intensive Applications
Designing Data-Intensive Applications by Martin Kleppmann Copyright © 2017 Martin Kleppmann. All rights reserved. Printed in the United States of America. Published by O’Reilly Media, Inc. , 1005 Gravenstein Highway North, Sebastopol, CA 95472. O’Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles ( http://oreilly.com/safari ). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com . Editors: Ann Spencer and Marie Beaugureau Indexer: Ellen Troutman-Zaig Production Editor: Kristen Brown Interior Designer: David Futato Copyeditor: Rachel Head Cover Designer: Karen Montgomery Proofreader: Amanda Kersey Illustrator: Rebecca Demarest March 2017: First Edition Revision History for the First Edition 2017-03-01: First Release See http://oreilly.com/catalog/errata.csp?isbn=9781449373320 for release details. The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Designing Data-Intensive Applications , the cover image, and related trade dress are trademarks of O’Reilly Media, Inc. While the publisher and the author have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the author disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights. 978-1-449-37332-0 [LSI]
text00001.html
3
Dedication
Dedication Technology is a powerful force in our society. Data, software, and communication can be used for bad: to entrench unfair power structures, to undermine human rights, and to protect vested interests. But they can also be used for good: to make underrepresented people’s voices heard, to create opportunities for everyone, and to avert disasters. This book is dedicated to everyone working toward the good.
text00002.html
4
Chapter 4
Computing is pop culture. […] Pop culture holds a disdain for history. Pop culture is all about identity and feeling like you’re participating. It has nothing to do with cooperation, the past or the future — it’s living in the present. I think the same is true of most people who write code for money. They have no idea where [their culture came from]. Alan Kay , in interview with Dr Dobb’s Journal (2012)
text00003.html
5
Preface
Preface If you have worked in software engineering in recent years, especially in server-side and backend systems, you have probably been bombarded with a plethora of buzzwords relating to storage and processing of data. NoSQL! Big Data! Web-scale! Sharding! Eventual consistency! ACID! CAP theorem! Cloud services! MapReduce! Real-time! In the last decade we have seen many interesting developments in databases, in distributed systems, and in the ways we build applications on top of them. There are various driving forces for these developments: Internet companies such as Google, Yahoo!, Amazon, Facebook, LinkedIn, Microsoft, and Twitter are handling huge volumes of data and traffic, forcing them to create new tools that enable them to efficiently handle such scale. Businesses need to be agile, test hypotheses cheaply, and respond quickly to new market insights by keeping development cycles short and data models flexible. Free and open source software has become very successful and is now preferred to commercial or bespoke in-house software in many environments. CPU clock speeds are barely increasing, but multi-core processors are standard, and networks are getting faster. This means parallelism is only going to increase. Even if you work on a small team, you can now build systems that are distributed across many machines and even multiple geographic regions, thanks to infrastructure as a service (IaaS) such as Amazon Web Services. Many services are now expected to be highly available; extended downtime due to outages or maintenance is becoming increasingly unacceptable. Data-intensive applications are pushing the boundaries of what is possible by making use of these technological developments. We call an application data-intensive if data is its primary challenge — the quantity of data, the complexity of data, or the speed at which it is changing — as opposed to compute-intensive , where CPU cycles are the bottleneck. The tools and technologies that help data-intensive applications store and process data have been rapidly adapting to these changes. New types of database systems (“NoSQL”) have been getting lots of attention, but message queues, caches, search indexes, frameworks for batch and stream processing, and related technologies are very important too. Many applications use some combination of these. The buzzwords that fill this space are a sign of enthusiasm for the new possibilities, which is a great thing. However, as software engineers and architects, we also need to have a technically accurate and precise understanding of the various technologies and their trade-offs if we want to build good applications. For that understanding, we have to dig deeper than buzzwords. Fortunately, behind the rapid changes in technology, there are enduring principles that remain true, no matter which version of a particular tool you are using. If you understand those principles, you’re in a position to see where each tool fits in, how to make good use of it, and how to avoid its pitfalls. That’s where this book comes in. The goal of this book is to help you navigate the diverse and fast-changing landscape of technologies for processing and storing data. This book is not a tutorial for one particular tool, nor is it a textbook full of dry theory. Instead, we will look at examples of successful data systems: technologies that form the foundation of many popular applications and that have to meet scalability, performance, and reliability requirements in production every day. 
We will dig into the internals of those systems, tease apart their key algorithms, discuss their principles and the trade-offs they have to make. On this journey, we will try to find useful ways of thinking about data systems — not just how they work, but also why they work that way, and what questions we need to ask. After reading this book, you will be in a great position to decide which kind of technology is appropriate for which purpose, and understand how tools can be combined to form the foundation of a good application architecture. You won’t be ready to build your own database storage engine from scratch, but fortunately that is rarely necessary. You will, however, develop a good intuition for what your systems are doing under the hood so that you can reason about their behavior, make good design decisions, and track down any problems that may arise. Who Should Read This Book? If you develop applications that have some kind of server/backend for storing or processing data, and your applications use the internet (e.g., web applications, mobile apps, or internet-connected sensors), then this book is for you. This book is for software engineers, software architects, and technical managers who love to code. It is especially relevant if you need to make decisions about the architecture of the systems you work on — for example, if you need to choose tools for solving a given problem and figure out how best to apply them. But even if you have no choice over your tools, this book will help you better understand their strengths and weaknesses. You should have some experience building web-based applications or network services, and you should be familiar with relational databases and SQL. Any non-relational databases and other data-related tools you know are a bonus, but not required. A general understanding of common network protocols like TCP and HTTP is helpful. Your choice of programming language or framework makes no difference for this book. If any of the following are true for you, you’ll find this book valuable: You want to learn how to make data systems scalable, for example, to support web or mobile apps with millions of users. You need to make applications highly available (minimizing downtime) and operationally robust. You are looking for ways of making systems easier to maintain in the long run, even as they grow and as requirements and technologies change. You have a natural curiosity for the way things work and want to know what goes on inside major websites and online services. This book breaks down the internals of various databases and data processing systems, and it’s great fun to explore the bright thinking that went into their design. Sometimes, when discussing scalable data systems, people make comments along the lines of, “You’re not Google or Amazon. Stop worrying about scale and just use a relational database.” There is truth in that statement: building for scale that you don’t need is wasted effort and may lock you into an inflexible design. In effect, it is a form of premature optimization. However, it’s also important to choose the right tool for the job, and different technologies each have their own strengths and weaknesses. As we shall see, relational databases are important but not the final word on dealing with data. Scope of This Book This book does not attempt to give detailed instructions on how to install or use specific software packages or APIs, since there is already plenty of documentation for those things. 
Instead we discuss the various principles and trade-offs that are fundamental to data systems, and we explore the different design decisions taken by different products. In the ebook editions we have included links to the full text of online resources. All links were verified at the time of publication, but unfortunately links tend to break frequently due to the nature of the web. If you come across a broken link, or if you are reading a print copy of this book, you can look up references using a search engine. For academic papers, you can search for the title in Google Scholar to find open-access PDF files. Alternatively, you can find all of the references at https://github.com/ept/ddia-references , where we maintain up-to-date links. We look primarily at the architecture of data systems and the ways they are integrated into data-intensive applications. This book doesn’t have space to cover deployment, operations, security, management, and other areas — those are complex and important topics, and we wouldn’t do them justice by making them superficial side notes in this book. They deserve books of their own. Many of the technologies described in this book fall within the realm of the Big Data buzzword. However, the term “Big Data” is so overused and underdefined that it is not useful in a serious engineering discussion. This book uses less ambiguous terms, such as single-node versus distributed systems, or online/interactive versus offline/batch processing systems. This book has a bias toward free and open source software (FOSS), because reading, modifying, and executing source code is a great way to understand how something works in detail. Open platforms also reduce the risk of vendor lock-in. However, where appropriate, we also discuss proprietary software (closed-source software, software as a service, or companies’ in-house software that is only described in literature but not released publicly). Outline of This Book This book is arranged into three parts: In Part I , we discuss the fundamental ideas that underpin the design of data-intensive applications. We start in Chapter 1 by discussing what we’re actually trying to achieve: reliability, scalability, and maintainability; how we need to think about them; and how we can achieve them. In Chapter 2 we compare several different data models and query languages, and see how they are appropriate to different situations. In Chapter 3 we talk about storage engines: how databases arrange data on disk so that we can find it again efficiently. Chapter 4 turns to formats for data encoding (serialization) and evolution of schemas over time. In Part II , we move from data stored on one machine to data that is distributed across multiple machines. This is often necessary for scalability, but brings with it a variety of unique challenges. We first discuss replication ( Chapter 5 ), partitioning/sharding ( Chapter 6 ), and transactions ( Chapter 7 ). We then go into more detail on the problems with distributed systems ( Chapter 8 ) and what it means to achieve consistency and consensus in a distributed system ( Chapter 9 ). In Part III , we discuss systems that derive some datasets from other datasets. Derived data often occurs in heterogeneous systems: when there is no one database that can do everything well, applications need to integrate several different databases, caches, indexes, and so on. In Chapter 10 we start with a batch processing approach to derived data, and we build upon it with stream processing in Chapter 11 . 
Finally, in Chapter 12 we put everything together and discuss approaches for building reliable, scalable, and maintainable applications in the future. References and Further Reading Most of what we discuss in this book has already been said elsewhere in some form or another — in conference presentations, research papers, blog posts, code, bug trackers, mailing lists, and engineering folklore. This book summarizes the most important ideas from many different sources, and it includes pointers to the original literature throughout the text. The references at the end of each chapter are a great resource if you want to explore an area in more depth, and most of them are freely available online. O’Reilly Safari Note Safari (formerly Safari Books Online) is a membership-based training and reference platform for enterprise, government, educators, and individuals. Members have access to thousands of books, training videos, Learning Paths, interactive tutorials, and curated playlists from over 250 publishers, including O’Reilly Media, Harvard Business Review, Prentice Hall Professional, Addison-Wesley Professional, Microsoft Press, Sams, Que, Peachpit Press, Adobe, Focal Press, Cisco Press, John Wiley & Sons, Syngress, Morgan Kaufmann, IBM Redbooks, Packt, Adobe Press, FT Press, Apress, Manning, New Riders, McGraw-Hill, Jones & Bartlett, and Course Technology, among others. For more information, please visit http://oreilly.com/safari . How to Contact Us Please address comments and questions concerning this book to the publisher: O’Reilly Media, Inc. 1005 Gravenstein Highway North Sebastopol, CA 95472 800-998-9938 (in the United States or Canada) 707-829-0515 (international or local) 707-829-0104 (fax) We have a web page for this book, where we list errata, examples, and any additional information. You can access this page at http://bit.ly/designing-data-intensive-apps . To comment or ask technical questions about this book, send email to bookquestions@oreilly.com . For more information about our books, courses, conferences, and news, see our website at http://www.oreilly.com . Find us on Facebook: http://facebook.com/oreilly Follow us on Twitter: http://twitter.com/oreillymedia Watch us on YouTube: http://www.youtube.com/oreillymedia Acknowledgments This book is an amalgamation and systematization of a large number of other people’s ideas and knowledge, combining experience from both academic research and industrial practice. In computing we tend to be attracted to things that are new and shiny, but I think we have a huge amount to learn from things that have been done before. This book has over 800 references to articles, blog posts, talks, documentation, and more, and they have been an invaluable learning resource for me. I am very grateful to the authors of this material for sharing their knowledge. I have also learned a lot from personal conversations, thanks to a large number of people who have taken the time to discuss ideas or patiently explain things to me. 
In particular, I would like to thank Joe Adler, Ross Anderson, Peter Bailis, Márton Balassi, Alastair Beresford, Mark Callaghan, Mat Clayton, Patrick Collison, Sean Cribbs, Shirshanka Das, Niklas Ekström, Stephan Ewen, Alan Fekete, Gyula Fóra, Camille Fournier, Andres Freund, John Garbutt, Seth Gilbert, Tom Haggett, Pat Helland, Joe Hellerstein, Jakob Homan, Heidi Howard, John Hugg, Julian Hyde, Conrad Irwin, Evan Jones, Flavio Junqueira, Jessica Kerr, Kyle Kingsbury, Jay Kreps, Carl Lerche, Nicolas Liochon, Steve Loughran, Lee Mallabone, Nathan Marz, Caitie McCaffrey, Josie McLellan, Christopher Meiklejohn, Ian Meyers, Neha Narkhede, Neha Narula, Cathy O’Neil, Onora O’Neill, Ludovic Orban, Zoran Perkov, Julia Powles, Chris Riccomini, Henry Robinson, David Rosenthal, Jennifer Rullmann, Matthew Sackman, Martin Scholl, Amit Sela, Gwen Shapira, Greg Spurrier, Sam Stokes, Ben Stopford, Tom Stuart, Diana Vasile, Rahul Vohra, Pete Warden, and Brett Wooldridge. Several more people have been invaluable to the writing of this book by reviewing drafts and providing feedback. For these contributions I am particularly indebted to Raul Agepati, Tyler Akidau, Mattias Andersson, Sasha Baranov, Veena Basavaraj, David Beyer, Jim Brikman, Paul Carey, Raul Castro Fernandez, Joseph Chow, Derek Elkins, Sam Elliott, Alexander Gallego, Mark Grover, Stu Halloway, Heidi Howard, Nicola Kleppmann, Stefan Kruppa, Bjorn Madsen, Sander Mak, Stefan Podkowinski, Phil Potter, Hamid Ramazani, Sam Stokes, and Ben Summers. Of course, I take all responsibility for any remaining errors or unpalatable opinions in this book. For helping this book become real, and for their patience with my slow writing and unusual requests, I am grateful to my editors Marie Beaugureau, Mike Loukides, Ann Spencer, and all the team at O’Reilly. For helping find the right words, I thank Rachel Head. For giving me the time and freedom to write in spite of other work commitments, I thank Alastair Beresford, Susan Goodhue, Neha Narkhede, and Kevin Scott. Very special thanks are due to Shabbir Diwan and Edie Freedman, who illustrated with great care the maps that accompany the chapters. It’s wonderful that they took on the unconventional idea of creating maps, and made them so beautiful and compelling. Finally, my love goes to my family and friends, without whom I would not have been able to get through this writing process that has taken almost four years. You’re the best.
text00004.html
6
Part I. Foundations of Data Systems
Part I. Foundations of Data Systems The first four chapters go through the fundamental ideas that apply to all data systems, whether running on a single machine or distributed across a cluster of machines: Chapter 1 introduces the terminology and approach that we’re going to use throughout this book. It examines what we actually mean by words like reliability , scalability , and maintainability , and how we can try to achieve these goals. Chapter 2 compares several different data models and query languages — the most visible distinguishing factor between databases from a developer’s point of view. We will see how different models are appropriate to different situations. Chapter 3 turns to the internals of storage engines and looks at how databases lay out data on disk. Different storage engines are optimized for different workloads, and choosing the right one can have a huge effect on performance. Chapter 4 compares various formats for data encoding (serialization) and especially examines how they fare in an environment where application requirements change and schemas need to adapt over time. Later, Part II will turn to the particular issues of distributed data systems.
text00005.html
7
Chapter 1. Reliable, Scalable, and Maintainable Applications
Chapter 1. Reliable, Scalable, and Maintainable Applications The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? Alan Kay , in interview with Dr Dobb’s Journal (2012) Many applications today are data-intensive , as opposed to compute-intensive . Raw CPU power is rarely a limiting factor for these applications — bigger problems are usually the amount of data, the complexity of data, and the speed at which it is changing. A data-intensive application is typically built from standard building blocks that provide commonly needed functionality. For example, many applications need to: Store data so that they, or another application, can find it again later ( databases ) Remember the result of an expensive operation, to speed up reads ( caches ) Allow users to search data by keyword or filter it in various ways ( search indexes ) Send a message to another process, to be handled asynchronously ( stream processing ) Periodically crunch a large amount of accumulated data ( batch processing ) If that sounds painfully obvious, that’s just because these data systems are such a successful abstraction: we use them all the time without thinking too much. When building an application, most engineers wouldn’t dream of writing a new data storage engine from scratch, because databases are a perfectly good tool for the job. But reality is not that simple. There are many database systems with different characteristics, because different applications have different requirements. There are various approaches to caching, several ways of building search indexes, and so on. When building an application, we still need to figure out which tools and which approaches are the most appropriate for the task at hand. And it can be hard to combine tools when you need to do something that a single tool cannot do alone. This book is a journey through both the principles and the practicalities of data systems, and how you can use them to build data-intensive applications. We will explore what different tools have in common, what distinguishes them, and how they achieve their characteristics. In this chapter, we will start by exploring the fundamentals of what we are trying to achieve: reliable, scalable, and maintainable data systems. We’ll clarify what those things mean, outline some ways of thinking about them, and go over the basics that we will need for later chapters. In the following chapters we will continue layer by layer, looking at different design decisions that need to be considered when working on a data-intensive application. Thinking About Data Systems We typically think of databases, queues, caches, etc. as being very different categories of tools. Although a database and a message queue have some superficial similarity — both store data for some time — they have very different access patterns, which means different performance characteristics, and thus very different implementations. So why should we lump them all together under an umbrella term like data systems ? Many new tools for data storage and processing have emerged in recent years. They are optimized for a variety of different use cases, and they no longer neatly fit into traditional categories [ 1 ]. For example, there are datastores that are also used as message queues (Redis), and there are message queues with database-like durability guarantees (Apache Kafka). 
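To make that blurring concrete, here is a minimal sketch (not from the book) of a datastore being used as a message queue: a Redis list acts as the queue, with one process pushing jobs and another blocking until a job arrives. It assumes the redis-py client and a Redis server on localhost.

import json
import redis

r = redis.Redis(host="localhost", port=6379)

# Producer: push a job onto a list -- the datastore is acting as a queue.
r.lpush("jobs", json.dumps({"task": "resize_image", "photo_id": 42}))

# Consumer: block until a job is available, then pop it from the other end of the list.
_key, payload = r.brpop("jobs")
job = json.loads(payload)
print("processing", job)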
The boundaries between the categories are becoming blurred. Secondly, increasingly many applications now have such demanding or wide-ranging requirements that a single tool can no longer meet all of its data processing and storage needs. Instead, the work is broken down into tasks that can be performed efficiently on a single tool, and those different tools are stitched together using application code. For example, if you have an application-managed caching layer (using Memcached or similar), or a full-text search server (such as Elasticsearch or Solr) separate from your main database, it is normally the application code’s responsibility to keep those caches and indexes in sync with the main database. Figure 1-1 gives a glimpse of what this may look like (we will go into detail in later chapters). Figure 1-1. One possible architecture for a data system that combines several components . When you combine several tools in order to provide a service, the service’s interface or application programming interface (API) usually hides those implementation details from clients. Now you have essentially created a new, special-purpose data system from smaller, general-purpose components. Your composite data system may provide certain guarantees: e.g., that the cache will be correctly invalidated or updated on writes so that outside clients see consistent results. You are now not only an application developer, but also a data system designer. If you are designing a data system or service, a lot of tricky questions arise. How do you ensure that the data remains correct and complete, even when things go wrong internally? How do you provide consistently good performance to clients, even when parts of your system are degraded? How do you scale to handle an increase in load? What does a good API for the service look like? There are many factors that may influence the design of a data system, including the skills and experience of the people involved, legacy system dependencies, the timescale for delivery, your organization’s tolerance of different kinds of risk, regulatory constraints, etc. Those factors depend very much on the situation. In this book, we focus on three concerns that are important in most software systems: Reliability The system should continue to work correctly (performing the correct function at the desired level of performance) even in the face of adversity (hardware or software faults, and even human error). See “Reliability” . Scalability As the system grows (in data volume, traffic volume, or complexity), there should be reasonable ways of dealing with that growth. See “Scalability” . Maintainability Over time, many different people will work on the system (engineering and operations, both maintaining current behavior and adapting the system to new use cases), and they should all be able to work on it productively . See “Maintainability” . These words are often cast around without a clear understanding of what they mean. In the interest of thoughtful engineering, we will spend the rest of this chapter exploring ways of thinking about reliability, scalability, and maintainability. Then, in the following chapters, we will look at various techniques, architectures, and algorithms that are used in order to achieve those goals. Reliability Everybody has an intuitive idea of what it means for something to be reliable or unreliable. For software, typical expectations include: The application performs the function that the user expected. 
It can tolerate the user making mistakes or using the software in unexpected ways. Its performance is good enough for the required use case, under the expected load and data volume. The system prevents any unauthorized access and abuse. If all those things together mean “working correctly,” then we can understand reliability as meaning, roughly, “continuing to work correctly, even when things go wrong.” The things that can go wrong are called faults , and systems that anticipate faults and can cope with them are called fault-tolerant or resilient . The former term is slightly misleading: it suggests that we could make a system tolerant of every possible kind of fault, which in reality is not feasible. If the entire planet Earth (and all servers on it) were swallowed by a black hole, tolerance of that fault would require web hosting in space — good luck getting that budget item approved. So it only makes sense to talk about tolerating certain types of faults. Note that a fault is not the same as a failure [ 2 ]. A fault is usually defined as one component of the system deviating from its spec, whereas a failure is when the system as a whole stops providing the required service to the user. It is impossible to reduce the probability of a fault to zero; therefore it is usually best to design fault-tolerance mechanisms that prevent faults from causing failures. In this book we cover several techniques for building reliable systems from unreliable parts. Counterintuitively, in such fault-tolerant systems, it can make sense to increase the rate of faults by triggering them deliberately — for example, by randomly killing individual processes without warning. Many critical bugs are actually due to poor error handling [ 3 ]; by deliberately inducing faults, you ensure that the fault-tolerance machinery is continually exercised and tested, which can increase your confidence that faults will be handled correctly when they occur naturally. The Netflix Chaos Monkey [ 4 ] is an example of this approach. Although we generally prefer tolerating faults over preventing faults, there are cases where prevention is better than cure (e.g., because no cure exists). This is the case with security matters, for example: if an attacker has compromised a system and gained access to sensitive data, that event cannot be undone. However, this book mostly deals with the kinds of faults that can be cured, as described in the following sections. Hardware Faults When we think of causes of system failure, hardware faults quickly come to mind. Hard disks crash, RAM becomes faulty, the power grid has a blackout, someone unplugs the wrong network cable. Anyone who has worked with large datacenters can tell you that these things happen all the time when you have a lot of machines. Hard disks are reported as having a mean time to failure (MTTF) of about 10 to 50 years [ 5 , 6 ]. Thus, on a storage cluster with 10,000 disks, we should expect on average one disk to die per day. Our first response is usually to add redundancy to the individual hardware components in order to reduce the failure rate of the system. Disks may be set up in a RAID configuration, servers may have dual power supplies and hot-swappable CPUs, and datacenters may have batteries and diesel generators for backup power. When one component dies, the redundant component can take its place while the broken component is replaced. 
This approach cannot completely prevent hardware problems from causing failures, but it is well understood and can often keep a machine running uninterrupted for years. Until recently, redundancy of hardware components was sufficient for most applications, since it makes total failure of a single machine fairly rare. As long as you can restore a backup onto a new machine fairly quickly, the downtime in case of failure is not catastrophic in most applications. Thus, multi-machine redundancy was only required by a small number of applications for which high availability was absolutely essential. However, as data volumes and applications’ computing demands have increased, more applications have begun using larger numbers of machines, which proportionally increases the rate of hardware faults. Moreover, in some cloud platforms such as Amazon Web Services (AWS) it is fairly common for virtual machine instances to become unavailable without warning [ 7 ], as the platforms are designed to prioritize flexibility and elasticity i over single-machine reliability. Hence there is a move toward systems that can tolerate the loss of entire machines, by using software fault-tolerance techniques in preference or in addition to hardware redundancy. Such systems also have operational advantages: a single-server system requires planned downtime if you need to reboot the machine (to apply operating system security patches, for example), whereas a system that can tolerate machine failure can be patched one node at a time, without downtime of the entire system (a rolling upgrade ; see Chapter 4 ). Software Errors We usually think of hardware faults as being random and independent from each other: one machine’s disk failing does not imply that another machine’s disk is going to fail. There may be weak correlations (for example due to a common cause, such as the temperature in the server rack), but otherwise it is unlikely that a large number of hardware components will fail at the same time. Another class of fault is a systematic error within the system [ 8 ]. Such faults are harder to anticipate, and because they are correlated across nodes, they tend to cause many more system failures than uncorrelated hardware faults [ 5 ]. Examples include: A software bug that causes every instance of an application server to crash when given a particular bad input. For example, consider the leap second on June 30, 2012, that caused many applications to hang simultaneously due to a bug in the Linux kernel [ 9 ]. A runaway process that uses up some shared resource — CPU time, memory, disk space, or network bandwidth. A service that the system depends on that slows down, becomes unresponsive, or starts returning corrupted responses. Cascading failures, where a small fault in one component triggers a fault in another component, which in turn triggers further faults [ 10 ]. The bugs that cause these kinds of software faults often lie dormant for a long time until they are triggered by an unusual set of circumstances. In those circumstances, it is revealed that the software is making some kind of assumption about its environment — and while that assumption is usually true, it eventually stops being true for some reason [ 11 ]. There is no quick solution to the problem of systematic faults in software. 
Lots of small things can help: carefully thinking about assumptions and interactions in the system; thorough testing; process isolation; allowing processes to crash and restart; measuring, monitoring, and analyzing system behavior in production. If a system is expected to provide some guarantee (for example, in a message queue, that the number of incoming messages equals the number of outgoing messages), it can constantly check itself while it is running and raise an alert if a discrepancy is found [ 12 ]. Human Errors Humans design and build software systems, and the operators who keep the systems running are also human. Even when they have the best intentions, humans are known to be unreliable. For example, one study of large internet services found that configuration errors by operators were the leading cause of outages, whereas hardware faults (servers or network) played a role in only 10–25% of outages [ 13 ]. How do we make our systems reliable, in spite of unreliable humans? The best systems combine several approaches: Design systems in a way that minimizes opportunities for error. For example, well-designed abstractions, APIs, and admin interfaces make it easy to do “the right thing” and discourage “the wrong thing.” However, if the interfaces are too restrictive people will work around them, negating their benefit, so this is a tricky balance to get right. Decouple the places where people make the most mistakes from the places where they can cause failures. In particular, provide fully featured non-production sandbox environments where people can explore and experiment safely, using real data, without affecting real users. Test thoroughly at all levels, from unit tests to whole-system integration tests and manual tests [ 3 ]. Automated testing is widely used, well understood, and especially valuable for covering corner cases that rarely arise in normal operation. Allow quick and easy recovery from human errors, to minimize the impact in the case of a failure. For example, make it fast to roll back configuration changes, roll out new code gradually (so that any unexpected bugs affect only a small subset of users), and provide tools to recompute data (in case it turns out that the old computation was incorrect). Set up detailed and clear monitoring, such as performance metrics and error rates. In other engineering disciplines this is referred to as telemetry . (Once a rocket has left the ground, telemetry is essential for tracking what is happening, and for understanding failures [ 14 ].) Monitoring can show us early warning signals and allow us to check whether any assumptions or constraints are being violated. When a problem occurs, metrics can be invaluable in diagnosing the issue. Implement good management practices and training — a complex and important aspect, and beyond the scope of this book. How Important Is Reliability? Reliability is not just for nuclear power stations and air traffic control software — more mundane applications are also expected to work reliably. Bugs in business applications cause lost productivity (and legal risks if figures are reported incorrectly), and outages of ecommerce sites can have huge costs in terms of lost revenue and damage to reputation. Even in “noncritical” applications we have a responsibility to our users. Consider a parent who stores all their pictures and videos of their children in your photo application [ 15 ]. How would they feel if that database was suddenly corrupted? Would they know how to restore it from a backup? 
There are situations in which we may choose to sacrifice reliability in order to reduce development cost (e.g., when developing a prototype product for an unproven market) or operational cost (e.g., for a service with a very narrow profit margin) — but we should be very conscious of when we are cutting corners. Scalability Even if a system is working reliably today, that doesn’t mean it will necessarily work reliably in the future. One common reason for degradation is increased load: perhaps the system has grown from 10,000 concurrent users to 100,000 concurrent users, or from 1 million to 10 million. Perhaps it is processing much larger volumes of data than it did before. Scalability is the term we use to describe a system’s ability to cope with increased load. Note, however, that it is not a one-dimensional label that we can attach to a system: it is meaningless to say “X is scalable” or “Y doesn’t scale.” Rather, discussing scalability means considering questions like “If the system grows in a particular way, what are our options for coping with the growth?” and “How can we add computing resources to handle the additional load?” Describing Load First, we need to succinctly describe the current load on the system; only then can we discuss growth questions (what happens if our load doubles?). Load can be described with a few numbers which we call load parameters . The best choice of parameters depends on the architecture of your system: it may be requests per second to a web server, the ratio of reads to writes in a database, the number of simultaneously active users in a chat room, the hit rate on a cache, or something else. Perhaps the average case is what matters for you, or perhaps your bottleneck is dominated by a small number of extreme cases. To make this idea more concrete, let’s consider Twitter as an example, using data published in November 2012 [ 16 ]. Two of Twitter’s main operations are: Post tweet A user can publish a new message to their followers (4.6k requests/sec on average, over 12k requests/sec at peak). Home timeline A user can view tweets posted by the people they follow (300k requests/sec). Simply handling 12,000 writes per second (the peak rate for posting tweets) would be fairly easy. However, Twitter’s scaling challenge is not primarily due to tweet volume, but due to fan-out ii — each user follows many people, and each user is followed by many people. There are broadly two ways of implementing these two operations: 1. Posting a tweet simply inserts the new tweet into a global collection of tweets. When a user requests their home timeline, look up all the people they follow, find all the tweets for each of those users, and merge them (sorted by time). In a relational database like in Figure 1-2 , you could write a query such as: SELECT tweets.*, users.* FROM tweets JOIN users ON tweets.sender_id = users.id JOIN follows ON follows.followee_id = users.id WHERE follows.follower_id = current_user 2. Maintain a cache for each user’s home timeline — like a mailbox of tweets for each recipient user (see Figure 1-3 ). When a user posts a tweet, look up all the people who follow that user, and insert the new tweet into each of their home timeline caches. The request to read the home timeline is then cheap, because its result has been computed ahead of time. Figure 1-2. Simple relational schema for implementing a Twitter home timeline. Figure 1-3. Twitter’s data pipeline for delivering tweets to followers, with load parameters as of November 2012 [ 16 ].
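As a rough illustration of the two approaches just described (a sketch, not Twitter's actual implementation), the following Python fragment contrasts read-time merging with write-time fan-out, using in-memory dictionaries in place of the tweets table and the per-user timeline caches:

from collections import defaultdict

tweets = []                          # approach 1: one global collection of tweets
followers = defaultdict(set)         # who follows whom: followee_id -> set of follower_ids
home_timelines = defaultdict(list)   # approach 2: a precomputed timeline cache per user

def post_tweet_approach1(sender_id, text):
    # Writes are cheap: just append to the global collection.
    tweets.append({"sender_id": sender_id, "text": text})

def home_timeline_approach1(following):
    # Reads do the work: scan and merge tweets from everyone the user follows.
    return [t for t in tweets if t["sender_id"] in following]

def post_tweet_approach2(sender_id, text):
    # Writes do the work: fan the tweet out to every follower's cache
    # (about 75 inserts per tweet on average, tens of millions for a celebrity).
    tweet = {"sender_id": sender_id, "text": text}
    for follower_id in followers[sender_id]:
        home_timelines[follower_id].append(tweet)

def home_timeline_approach2(user_id):
    # Reads are cheap: the result was computed at write time.
    return home_timelines[user_id]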
The first version of Twitter used approach 1, but the systems struggled to keep up with the load of home timeline queries, so the company switched to approach 2. This works better because the average rate of published tweets is almost two orders of magnitude lower than the rate of home timeline reads, and so in this case it’s preferable to do more work at write time and less at read time. However, the downside of approach 2 is that posting a tweet now requires a lot of extra work. On average, a tweet is delivered to about 75 followers, so 4.6k tweets per second become 345k writes per second to the home timeline caches. But this average hides the fact that the number of followers per user varies wildly, and some users have over 30 million followers. This means that a single tweet may result in over 30 million writes to home timelines! Doing this in a timely manner — Twitter tries to deliver tweets to followers within five seconds — is a significant challenge. In the example of Twitter, the distribution of followers per user (maybe weighted by how often those users tweet) is a key load parameter for discussing scalability, since it determines the fan-out load. Your application may have very different characteristics, but you can apply similar principles to reasoning about its load. The final twist of the Twitter anecdote: now that approach 2 is robustly implemented, Twitter is moving to a hybrid of both approaches. Most users’ tweets continue to be fanned out to home timelines at the time when they are posted, but a small number of users with a very large number of followers (i.e., celebrities) are excepted from this fan-out. Tweets from any celebrities that a user may follow are fetched separately and merged with that user’s home timeline when it is read, like in approach 1. This hybrid approach is able to deliver consistently good performance. We will revisit this example in Chapter 12 after we have covered some more technical ground. Describing Performance Once you have described the load on your system, you can investigate what happens when the load increases. You can look at it in two ways: When you increase a load parameter and keep the system resources (CPU, memory, network bandwidth, etc.) unchanged, how is the performance of your system affected? When you increase a load parameter, how much do you need to increase the resources if you want to keep performance unchanged? Both questions require performance numbers, so let’s look briefly at describing the performance of a system. In a batch processing system such as Hadoop, we usually care about throughput — the number of records we can process per second, or the total time it takes to run a job on a dataset of a certain size. iii In online systems, what’s usually more important is the service’s response time — that is, the time between a client sending a request and receiving a response. Latency and response time Latency and response time are often used synonymously, but they are not the same. The response time is what the client sees: besides the actual time to process the request (the service time ), it includes network delays and queueing delays. Latency is the duration that a request is waiting to be handled — during which it is latent , awaiting service [ 17 ]. Even if you only make the same request over and over again, you’ll get a slightly different response time on every try. In practice, in a system handling a variety of requests, the response time can vary a lot. 
We therefore need to think of response time not as a single number, but as a distribution of values that you can measure. In Figure 1-4 , each gray bar represents a request to a service, and its height shows how long that request took. Most requests are reasonably fast, but there are occasional outliers that take much longer. Perhaps the slow requests are intrinsically more expensive, e.g., because they process more data. But even in a scenario where you’d think all requests should take the same time, you get variation: random additional latency could be introduced by a context switch to a background process, the loss of a network packet and TCP retransmission, a garbage collection pause, a page fault forcing a read from disk, mechanical vibrations in the server rack [ 18 ], or many other causes. Figure 1-4. Illustrating mean and percentiles: response times for a sample of 100 requests to a service. It’s common to see the average response time of a service reported. (Strictly speaking, the term “average” doesn’t refer to any particular formula, but in practice it is usually understood as the arithmetic mean : given n values, add up all the values, and divide by n .) However, the mean is not a very good metric if you want to know your “typical” response time, because it doesn’t tell you how many users actually experienced that delay. Usually it is better to use percentiles . If you take your list of response times and sort it from fastest to slowest, then the median is the halfway point: for example, if your median response time is 200 ms, that means half your requests return in less than 200 ms, and half your requests take longer than that. This makes the median a good metric if you want to know how long users typically have to wait: half of user requests are served in less than the median response time, and the other half take longer than the median. The median is also known as the 50th percentile , and sometimes abbreviated as p50 . Note that the median refers to a single request; if the user makes several requests (over the course of a session, or because several resources are included in a single page), the probability that at least one of them is slower than the median is much greater than 50%. In order to figure out how bad your outliers are, you can look at higher percentiles: the 95th , 99th , and 99.9th percentiles are common (abbreviated p95 , p99 , and p999 ). They are the response time thresholds at which 95%, 99%, or 99.9% of requests are faster than that particular threshold. For example, if the 95th percentile response time is 1.5 seconds, that means 95 out of 100 requests take less than 1.5 seconds, and 5 out of 100 requests take 1.5 seconds or more. This is illustrated in Figure 1-4 . High percentiles of response times, also known as tail latencies , are important because they directly affect users’ experience of the service. For example, Amazon describes response time requirements for internal services in terms of the 99.9th percentile, even though it only affects 1 in 1,000 requests. This is because the customers with the slowest requests are often those who have the most data on their accounts because they have made many purchases — that is, they’re the most valuable customers [ 19 ]. It’s important to keep those customers happy by ensuring the website is fast for them: Amazon has also observed that a 100 ms increase in response time reduces sales by 1% [ 20 ], and others report that a 1-second slowdown reduces a customer satisfaction metric by 16% [ 21 , 22 ]. 
On the other hand, optimizing the 99.99th percentile (the slowest 1 in 10,000 requests) was deemed too expensive and to not yield enough benefit for Amazon’s purposes. Reducing response times at very high percentiles is difficult because they are easily affected by random events outside of your control, and the benefits are diminishing. For example, percentiles are often used in service level objectives (SLOs) and service level agreements (SLAs), contracts that define the expected performance and availability of a service. An SLA may state that the service is considered to be up if it has a median response time of less than 200 ms and a 99th percentile under 1 s (if the response time is longer, it might as well be down), and the service may be required to be up at least 99.9% of the time. These metrics set expectations for clients of the service and allow customers to demand a refund if the SLA is not met. Queueing delays often account for a large part of the response time at high percentiles. As a server can only process a small number of things in parallel (limited, for example, by its number of CPU cores), it only takes a small number of slow requests to hold up the processing of subsequent requests — an effect sometimes known as head-of-line blocking . Even if those subsequent requests are fast to process on the server, the client will see a slow overall response time due to the time waiting for the prior request to complete. Due to this effect, it is important to measure response times on the client side. When generating load artificially in order to test the scalability of a system, the load-generating client needs to keep sending requests independently of the response time. If the client waits for the previous request to complete before sending the next one, that behavior has the effect of artificially keeping the queues shorter in the test than they would be in reality, which skews the measurements [ 23 ]. Percentiles in Practice High percentiles become especially important in backend services that are called multiple times as part of serving a single end-user request. Even if you make the calls in parallel, the end-user request still needs to wait for the slowest of the parallel calls to complete. It takes just one slow call to make the entire end-user request slow, as illustrated in Figure 1-5 . Even if only a small percentage of backend calls are slow, the chance of getting a slow call increases if an end-user request requires multiple backend calls, and so a higher proportion of end-user requests end up being slow (an effect known as tail latency amplification [ 24 ]). If you want to add response time percentiles to the monitoring dashboards for your services, you need to efficiently calculate them on an ongoing basis. For example, you may want to keep a rolling window of response times of requests in the last 10 minutes. Every minute, you calculate the median and various percentiles over the values in that window and plot those metrics on a graph. The naïve implementation is to keep a list of response times for all requests within the time window and to sort that list every minute. If that is too inefficient for you, there are algorithms that can calculate a good approximation of percentiles at minimal CPU and memory cost, such as forward decay [ 25 ], t-digest [ 26 ], or HdrHistogram [ 27 ]. 
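As a concrete sketch of that naïve implementation (the window size and names are illustrative, not prescribed by the text), you could keep the raw response times for the last ten minutes and sort them whenever a percentile is requested:

import time

WINDOW_SECONDS = 600          # a 10-minute rolling window
samples = []                  # list of (timestamp, response_time_ms) pairs

def record(response_time_ms, now=None):
    samples.append((now if now is not None else time.time(), response_time_ms))

def percentile(p, now=None):
    now = now if now is not None else time.time()
    # Keep only the samples inside the window, then sort them -- the naive approach.
    recent = sorted(rt for ts, rt in samples if now - ts <= WINDOW_SECONDS)
    if not recent:
        return None
    index = min(len(recent) - 1, int(p / 100 * len(recent)))
    return recent[index]

# percentile(50) is the median, percentile(99) the p99, over the last 10 minutes.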
Beware that averaging percentiles, e.g., to reduce the time resolution or to combine data from several machines, is mathematically meaningless — the right way of aggregating response time data is to add the histograms [ 28 ]. Figure 1-5. When several backend calls are needed to serve a request, it takes just a single slow backend request to slow down the entire end-user request. Approaches for Coping with Load Now that we have discussed the parameters for describing load and metrics for measuring performance, we can start discussing scalability in earnest: how do we maintain good performance even when our load parameters increase by some amount? An architecture that is appropriate for one level of load is unlikely to cope with 10 times that load. If you are working on a fast-growing service, it is therefore likely that you will need to rethink your architecture on every order of magnitude load increase — or perhaps even more often than that. People often talk of a dichotomy between scaling up ( vertical scaling , moving to a more powerful machine) and scaling out ( horizontal scaling , distributing the load across multiple smaller machines). Distributing load across multiple machines is also known as a shared-nothing architecture. A system that can run on a single machine is often simpler, but high-end machines can become very expensive, so very intensive workloads often can’t avoid scaling out. In reality, good architectures usually involve a pragmatic mixture of approaches: for example, using several fairly powerful machines can still be simpler and cheaper than a large number of small virtual machines. Some systems are elastic , meaning that they can automatically add computing resources when they detect a load increase, whereas other systems are scaled manually (a human analyzes the capacity and decides to add more machines to the system). An elastic system can be useful if load is highly unpredictable, but manually scaled systems are simpler and may have fewer operational surprises (see “Rebalancing Partitions” ). While distributing stateless services across multiple machines is fairly straightforward, taking stateful data systems from a single node to a distributed setup can introduce a lot of additional complexity. For this reason, common wisdom until recently was to keep your database on a single node (scale up) until scaling cost or high-availability requirements forced you to make it distributed. As the tools and abstractions for distributed systems get better, this common wisdom may change, at least for some kinds of applications. It is conceivable that distributed data systems will become the default in the future, even for use cases that don’t handle large volumes of data or traffic. Over the course of the rest of this book we will cover many kinds of distributed data systems, and discuss how they fare not just in terms of scalability, but also ease of use and maintainability. The architecture of systems that operate at large scale is usually highly specific to the application — there is no such thing as a generic, one-size-fits-all scalable architecture (informally known as magic scaling sauce ). The problem may be the volume of reads, the volume of writes, the volume of data to store, the complexity of the data, the response time requirements, the access patterns, or (usually) some mixture of all of these plus many more issues. 
For example, a system that is designed to handle 100,000 requests per second, each 1 kB in size, looks very different from a system that is designed for 3 requests per minute, each 2 GB in size — even though the two systems have the same data throughput. An architecture that scales well for a particular application is built around assumptions of which operations will be common and which will be rare — the load parameters. If those assumptions turn out to be wrong, the engineering effort for scaling is at best wasted, and at worst counterproductive. In an early-stage startup or an unproven product it’s usually more important to be able to iterate quickly on product features than it is to scale to some hypothetical future load. Even though they are specific to a particular application, scalable architectures are nevertheless usually built from general-purpose building blocks, arranged in familiar patterns. In this book we discuss those building blocks and patterns. Maintainability It is well known that the majority of the cost of software is not in its initial development, but in its ongoing maintenance — fixing bugs, keeping its systems operational, investigating failures, adapting it to new platforms, modifying it for new use cases, repaying technical debt, and adding new features. Yet, unfortunately, many people working on software systems dislike maintenance of so-called legacy systems — perhaps it involves fixing other people’s mistakes, or working with platforms that are now outdated, or systems that were forced to do things they were never intended for. Every legacy system is unpleasant in its own way, and so it is difficult to give general recommendations for dealing with them. However, we can and should design software in such a way that it will hopefully minimize pain during maintenance, and thus avoid creating legacy software ourselves. To this end, we will pay particular attention to three design principles for software systems: Operability Make it easy for operations teams to keep the system running smoothly. Simplicity Make it easy for new engineers to understand the system, by removing as much complexity as possible from the system. (Note this is not the same as simplicity of the user interface.) Evolvability Make it easy for engineers to make changes to the system in the future, adapting it for unanticipated use cases as requirements change. Also known as extensibility , modifiability , or plasticity . As previously with reliability and scalability, there are no easy solutions for achieving these goals. Rather, we will try to think about systems with operability, simplicity, and evolvability in mind. Operability: Making Life Easy for Operations It has been suggested that “good operations can often work around the limitations of bad (or incomplete) software, but good software cannot run reliably with bad operations” [ 12 ]. While some aspects of operations can and should be automated, it is still up to humans to set up that automation in the first place and to make sure it’s working correctly. Operations teams are vital to keeping a software system running smoothly. 
A good operations team typically is responsible for the following, and more [ 29 ]: Monitoring the health of the system and quickly restoring service if it goes into a bad state Tracking down the cause of problems, such as system failures or degraded performance Keeping software and platforms up to date, including security patches Keeping tabs on how different systems affect each other, so that a problematic change can be avoided before it causes damage Anticipating future problems and solving them before they occur (e.g., capacity planning) Establishing good practices and tools for deployment, configuration management, and more Performing complex maintenance tasks, such as moving an application from one platform to another Maintaining the security of the system as configuration changes are made Defining processes that make operations predictable and help keep the production environment stable Preserving the organization’s knowledge about the system, even as individual people come and go Good operability means making routine tasks easy, allowing the operations team to focus their efforts on high-value activities. Data systems can do various things to make routine tasks easy, including: Providing visibility into the runtime behavior and internals of the system, with good monitoring Providing good support for automation and integration with standard tools Avoiding dependency on individual machines (allowing machines to be taken down for maintenance while the system as a whole continues running uninterrupted) Providing good documentation and an easy-to-understand operational model (“If I do X, Y will happen”) Providing good default behavior, but also giving administrators the freedom to override defaults when needed Self-healing where appropriate, but also giving administrators manual control over the system state when needed Exhibiting predictable behavior, minimizing surprises Simplicity: Managing Complexity Small software projects can have delightfully simple and expressive code, but as projects get larger, they often become very complex and difficult to understand. This complexity slows down everyone who needs to work on the system, further increasing the cost of maintenance. A software project mired in complexity is sometimes described as a big ball of mud [ 30 ]. There are various possible symptoms of complexity: explosion of the state space, tight coupling of modules, tangled dependencies, inconsistent naming and terminology, hacks aimed at solving performance problems, special-casing to work around issues elsewhere, and many more. Much has been said on this topic already [ 31 , 32 , 33 ]. When complexity makes maintenance hard, budgets and schedules are often overrun. In complex software, there is also a greater risk of introducing bugs when making a change: when the system is harder for developers to understand and reason about, hidden assumptions, unintended consequences, and unexpected interactions are more easily overlooked. Conversely, reducing complexity greatly improves the maintainability of software, and thus simplicity should be a key goal for the systems we build. Making a system simpler does not necessarily mean reducing its functionality; it can also mean removing accidental complexity. Moseley and Marks [ 32 ] define complexity as accidental if it is not inherent in the problem that the software solves (as seen by the users) but arises only from the implementation. One of the best tools we have for removing accidental complexity is abstraction . 
A good abstraction can hide a great deal of implementation detail behind a clean, simple-to-understand façade. A good abstraction can also be used for a wide range of different applications. Not only is this reuse more efficient than reimplementing a similar thing multiple times, but it also leads to higher-quality software, as quality improvements in the abstracted component benefit all applications that use it. For example, high-level programming languages are abstractions that hide machine code, CPU registers, and syscalls. SQL is an abstraction that hides complex on-disk and in-memory data structures, concurrent requests from other clients, and inconsistencies after crashes. Of course, when programming in a high-level language, we are still using machine code; we are just not using it directly , because the programming language abstraction saves us from having to think about it. However, finding good abstractions is very hard. In the field of distributed systems, although there are many good algorithms, it is much less clear how we should be packaging them into abstractions that help us keep the complexity of the system at a manageable level. Throughout this book, we will keep our eyes open for good abstractions that allow us to extract parts of a large system into well-defined, reusable components. Evolvability: Making Change Easy It’s extremely unlikely that your system’s requirements will remain unchanged forever. They are much more likely to be in constant flux: you learn new facts, previously unanticipated use cases emerge, business priorities change, users request new features, new platforms replace old platforms, legal or regulatory requirements change, growth of the system forces architectural changes, etc. In terms of organizational processes, Agile working patterns provide a framework for adapting to change. The Agile community has also developed technical tools and patterns that are helpful when developing software in a frequently changing environment, such as test-driven development (TDD) and refactoring. Most discussions of these Agile techniques focus on a fairly small, local scale (a couple of source code files within the same application). In this book, we search for ways of increasing agility on the level of a larger data system, perhaps consisting of several different applications or services with different characteristics. For example, how would you “refactor” Twitter’s architecture for assembling home timelines ( “Describing Load” ) from approach 1 to approach 2? The ease with which you can modify a data system, and adapt it to changing requirements, is closely linked to its simplicity and its abstractions: simple and easy-to-understand systems are usually easier to modify than complex ones. But since this is such an important idea, we will use a different word to refer to agility on a data system level: evolvability [ 34 ]. Summary In this chapter, we have explored some fundamental ways of thinking about data-intensive applications. These principles will guide us through the rest of the book, where we dive into deep technical detail. An application has to meet various requirements in order to be useful. There are functional requirements (what it should do, such as allowing data to be stored, retrieved, searched, and processed in various ways), and some nonfunctional requirements (general properties like security, reliability, compliance, scalability, compatibility, and maintainability). 
In this chapter we discussed reliability, scalability, and maintainability in detail. Reliability means making systems work correctly, even when faults occur. Faults can be in hardware (typically random and uncorrelated), software (bugs are typically systematic and hard to deal with), and humans (who inevitably make mistakes from time to time). Fault-tolerance techniques can hide certain types of faults from the end user. Scalability means having strategies for keeping performance good, even when load increases. In order to discuss scalability, we first need ways of describing load and performance quantitatively. We briefly looked at Twitter’s home timelines as an example of describing load, and response time percentiles as a way of measuring performance. In a scalable system, you can add processing capacity in order to remain reliable under high load. Maintainability has many facets, but in essence it’s about making life better for the engineering and operations teams who need to work with the system. Good abstractions can help reduce complexity and make the system easier to modify and adapt for new use cases. Good operability means having good visibility into the system’s health, and having effective ways of managing it. There is unfortunately no easy fix for making applications reliable, scalable, or maintainable. However, there are certain patterns and techniques that keep reappearing in different kinds of applications. In the next few chapters we will take a look at some examples of data systems and analyze how they work toward those goals. Later in the book, in Part III , we will look at patterns for systems that consist of several components working together, such as the one in Figure 1-1 . Footnotes i Defined in “Approaches for Coping with Load” . ii A term borrowed from electronic engineering, where it describes the number of logic gate inputs that are attached to another gate’s output. The output needs to supply enough current to drive all the attached inputs. In transaction processing systems, we use it to describe the number of requests to other services that we need to make in order to serve one incoming request. iii In an ideal world, the running time of a batch job is the size of the dataset divided by the throughput. In practice, the running time is often longer, due to skew (data not being spread evenly across worker processes) and needing to wait for the slowest task to complete. References [ 1 ] Michael Stonebraker and Uğur Çetintemel: “ ‘One Size Fits All’: An Idea Whose Time Has Come and Gone ,” at 21st International Conference on Data Engineering (ICDE), April 2005. [ 2 ] Walter L. Heimerdinger and Charles B. Weinstock: “ A Conceptual Framework for System Fault Tolerance ,” Technical Report CMU/SEI-92-TR-033, Software Engineering Institute, Carnegie Mellon University, October 1992. [ 3 ] Ding Yuan, Yu Luo, Xin Zhuang, et al.: “ Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems ,” at 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI), October 2014. [ 4 ] Yury Izrailevsky and Ariel Tseitlin: “ The Netflix Simian Army ,” techblog.netflix.com , July 19, 2011. [ 5 ] Daniel Ford, François Labelle, Florentina I. Popovici, et al.: “ Availability in Globally Distributed Storage Systems ,” at 9th USENIX Symposium on Operating Systems Design and Implementation (OSDI), October 2010. [ 6 ] Brian Beach: “ Hard Drive Reliability Update – Sep 2014 ,” backblaze.com , September 23, 2014. 
[ 7 ] Laurie Voss: “ AWS: The Good, the Bad and the Ugly ,” blog.awe.sm , December 18, 2012. [ 8 ] Haryadi S. Gunawi, Mingzhe Hao, Tanakorn Leesatapornwongsa, et al.: “ What Bugs Live in the Cloud? ,” at 5th ACM Symposium on Cloud Computing (SoCC), November 2014. doi:10.1145/2670979.2670986 [ 9 ] Nelson Minar: “ Leap Second Crashes Half the Internet ,” somebits.com , July 3, 2012. [ 10 ] Amazon Web Services: “ Summary of the Amazon EC2 and Amazon RDS Service Disruption in the US East Region ,” aws.amazon.com , April 29, 2011. [ 11 ] Richard I. Cook: “ How Complex Systems Fail ,” Cognitive Technologies Laboratory, April 2000. [ 12 ] Jay Kreps: “ Getting Real About Distributed System Reliability ,” blog.empathybox.com , March 19, 2012. [ 13 ] David Oppenheimer, Archana Ganapathi, and David A. Patterson: “ Why Do Internet Services Fail, and What Can Be Done About It? ,” at 4th USENIX Symposium on Internet Technologies and Systems (USITS), March 2003. [ 14 ] Nathan Marz: “ Principles of Software Engineering, Part 1 ,” nathanmarz.com , April 2, 2013. [ 15 ] Michael Jurewitz: “ The Human Impact of Bugs ,” jury.me , March 15, 2013. [ 16 ] Raffi Krikorian: “ Timelines at Scale ,” at QCon San Francisco , November 2012. [ 17 ] Martin Fowler: Patterns of Enterprise Application Architecture . Addison Wesley, 2002. ISBN: 978-0-321-12742-6 [ 18 ] Kelly Sommers: “ After all that run around, what caused 500ms disk latency even when we replaced physical server? ” twitter.com , November 13, 2014. [ 19 ] Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, et al.: “ Dynamo: Amazon’s Highly Available Key-Value Store ,” at 21st ACM Symposium on Operating Systems Principles (SOSP), October 2007. [ 20 ] Greg Linden: “ Make Data Useful ,” slides from presentation at Stanford University Data Mining class (CS345), December 2006. [ 21 ] Tammy Everts: “ The Real Cost of Slow Time vs Downtime ,” webperformancetoday.com , November 12, 2014. [ 22 ] Jake Brutlag: “ Speed Matters for Google Web Search ,” googleresearch.blogspot.co.uk , June 22, 2009. [ 23 ] Tyler Treat: “ Everything You Know About Latency Is Wrong ,” bravenewgeek.com , December 12, 2015. [ 24 ] Jeffrey Dean and Luiz André Barroso: “ The Tail at Scale ,” Communications of the ACM , volume 56, number 2, pages 74–80, February 2013. doi:10.1145/2408776.2408794 [ 25 ] Graham Cormode, Vladislav Shkapenyuk, Divesh Srivastava, and Bojian Xu: “ Forward Decay: A Practical Time Decay Model for Streaming Systems ,” at 25th IEEE International Conference on Data Engineering (ICDE), March 2009. [ 26 ] Ted Dunning and Otmar Ertl: “ Computing Extremely Accurate Quantiles Using t-Digests ,” github.com , March 2014. [ 27 ] Gil Tene: “ HdrHistogram ,” hdrhistogram.org . [ 28 ] Baron Schwartz: “ Why Percentiles Don’t Work the Way You Think ,” vividcortex.com , December 7, 2015. [ 29 ] James Hamilton: “ On Designing and Deploying Internet-Scale Services ,” at 21st Large Installation System Administration Conference (LISA), November 2007. [ 30 ] Brian Foote and Joseph Yoder: “ Big Ball of Mud ,” at 4th Conference on Pattern Languages of Programs (PLoP), September 1997. [ 31 ] Frederick P Brooks: “No Silver Bullet – Essence and Accident in Software Engineering,” in The Mythical Man-Month , Anniversary edition, Addison-Wesley, 1995. ISBN: 978-0-201-83595-3 [ 32 ] Ben Moseley and Peter Marks: “ Out of the Tar Pit ,” at BCS Software Practice Advancement (SPA), 2006. [ 33 ] Rich Hickey: “ Simple Made Easy ,” at Strange Loop , September 2011. 
[ 34 ] Hongyu Pei Breivold, Ivica Crnkovic, and Peter J. Eriksson: “ Analyzing Software Evolvability ,” at 32nd Annual IEEE International Computer Software and Applications Conference (COMPSAC), July 2008. doi:10.1109/COMPSAC.2008.50
Chapter 2. Data Models and Query Languages
Chapter 2. Data Models and Query Languages The limits of my language mean the limits of my world. Ludwig Wittgenstein, Tractatus Logico-Philosophicus (1922) Data models are perhaps the most important part of developing software, because they have such a profound effect: not only on how the software is written, but also on how we think about the problem that we are solving. Most applications are built by layering one data model on top of another. For each layer, the key question is: how is it represented in terms of the next-lower layer? For example: As an application developer, you look at the real world (in which there are people, organizations, goods, actions, money flows, sensors, etc.) and model it in terms of objects or data structures, and APIs that manipulate those data structures. Those structures are often specific to your application. When you want to store those data structures, you express them in terms of a general-purpose data model, such as JSON or XML documents, tables in a relational database, or a graph model. The engineers who built your database software decided on a way of representing that JSON/XML/relational/graph data in terms of bytes in memory, on disk, or on a network. The representation may allow the data to be queried, searched, manipulated, and processed in various ways. On yet lower levels, hardware engineers have figured out how to represent bytes in terms of electrical currents, pulses of light, magnetic fields, and more. In a complex application there may be more intermediary levels, such as APIs built upon APIs, but the basic idea is still the same: each layer hides the complexity of the layers below it by providing a clean data model. These abstractions allow different groups of people — for example, the engineers at the database vendor and the application developers using their database — to work together effectively. There are many different kinds of data models, and every data model embodies assumptions about how it is going to be used. Some kinds of usage are easy and some are not supported; some operations are fast and some perform badly; some data transformations feel natural and some are awkward. It can take a lot of effort to master just one data model (think how many books there are on relational data modeling). Building software is hard enough, even when working with just one data model and without worrying about its inner workings. But since the data model has such a profound effect on what the software above it can and can’t do, it’s important to choose one that is appropriate to the application. In this chapter we will look at a range of general-purpose data models for data storage and querying (point 2 in the preceding list). In particular, we will compare the relational model, the document model, and a few graph-based data models. We will also look at various query languages and compare their use cases. In Chapter 3 we will discuss how storage engines work; that is, how these data models are actually implemented (point 3 in the list). Relational Model Versus Document Model The best-known data model today is probably that of SQL, based on the relational model proposed by Edgar Codd in 1970 [ 1 ]: data is organized into relations (called tables in SQL), where each relation is an unordered collection of tuples ( rows in SQL). The relational model was a theoretical proposal, and many people at the time doubted whether it could be implemented efficiently. 
However, by the mid-1980s, relational database management systems (RDBMSes) and SQL had become the tools of choice for most people who needed to store and query data with some kind of regular structure. The dominance of relational databases has lasted around 25‒30 years — an eternity in computing history. The roots of relational databases lie in business data processing , which was performed on mainframe computers in the 1960s and ’70s. The use cases appear mundane from today’s perspective: typically transaction processing (entering sales or banking transactions, airline reservations, stock-keeping in warehouses) and batch processing (customer invoicing, payroll, reporting). Other databases at that time forced application developers to think a lot about the internal representation of the data in the database. The goal of the relational model was to hide that implementation detail behind a cleaner interface. Over the years, there have been many competing approaches to data storage and querying. In the 1970s and early 1980s, the network model and the hierarchical model were the main alternatives, but the relational model came to dominate them. Object databases came and went again in the late 1980s and early 1990s. XML databases appeared in the early 2000s, but have only seen niche adoption. Each competitor to the relational model generated a lot of hype in its time, but it never lasted [ 2 ]. As computers became vastly more powerful and networked, they started being used for increasingly diverse purposes. And remarkably, relational databases turned out to generalize very well, beyond their original scope of business data processing, to a broad variety of use cases. Much of what you see on the web today is still powered by relational databases, be it online publishing, discussion, social networking, ecommerce, games, software-as-a-service productivity applications, or much more. The Birth of NoSQL Now, in the 2010s, NoSQL is the latest attempt to overthrow the relational model’s dominance. The name “NoSQL” is unfortunate, since it doesn’t actually refer to any particular technology — it was originally intended simply as a catchy Twitter hashtag for a meetup on open source, distributed, nonrelational databases in 2009 [ 3 ]. Nevertheless, the term struck a nerve and quickly spread through the web startup community and beyond. A number of interesting database systems are now associated with the #NoSQL hashtag, and it has been retroactively reinterpreted as Not Only SQL [ 4 ]. There are several driving forces behind the adoption of NoSQL databases, including: A need for greater scalability than relational databases can easily achieve, including very large datasets or very high write throughput A widespread preference for free and open source software over commercial database products Specialized query operations that are not well supported by the relational model Frustration with the restrictiveness of relational schemas, and a desire for a more dynamic and expressive data model [ 5 ] Different applications have different requirements, and the best choice of technology for one use case may well be different from the best choice for another use case. It therefore seems likely that in the foreseeable future, relational databases will continue to be used alongside a broad variety of nonrelational datastores — an idea that is sometimes called polyglot persistence [ 3 ]. 
The Object-Relational Mismatch Most application development today is done in object-oriented programming languages, which leads to a common criticism of the SQL data model: if data is stored in relational tables, an awkward translation layer is required between the objects in the application code and the database model of tables, rows, and columns. The disconnect between the models is sometimes called an impedance mismatch . i Object-relational mapping (ORM) frameworks like ActiveRecord and Hibernate reduce the amount of boilerplate code required for this translation layer, but they can’t completely hide the differences between the two models. For example, Figure 2-1 illustrates how a résumé (a LinkedIn profile) could be expressed in a relational schema. The profile as a whole can be identified by a unique identifier, user_id . Fields like first_name and last_name appear exactly once per user, so they can be modeled as columns on the users table. However, most people have had more than one job in their career (positions), and people may have varying numbers of periods of education and any number of pieces of contact information. There is a one-to-many relationship from the user to these items, which can be represented in various ways: In the traditional SQL model (prior to SQL:1999), the most common normalized representation is to put positions, education, and contact information in separate tables, with a foreign key reference to the users table, as in Figure 2-1 . Later versions of the SQL standard added support for structured datatypes and XML data; this allowed multi-valued data to be stored within a single row, with support for querying and indexing inside those documents. These features are supported to varying degrees by Oracle, IBM DB2, MS SQL Server, and PostgreSQL [ 6 , 7 ]. A JSON datatype is also supported by several databases, including IBM DB2, MySQL, and PostgreSQL [ 8 ]. A third option is to encode jobs, education, and contact info as a JSON or XML document, store it on a text column in the database, and let the application interpret its structure and content. In this setup, you typically cannot use the database to query for values inside that encoded column. Figure 2-1. Representing a LinkedIn profile using a relational schema. Photo of Bill Gates courtesy of Wikimedia Commons, Ricardo Stuckert, Agência Brasil. For a data structure like a résumé, which is mostly a self-contained document , a JSON representation can be quite appropriate: see Example 2-1 . JSON has the appeal of being much simpler than XML. Document-oriented databases like MongoDB [ 9 ], RethinkDB [ 10 ], CouchDB [ 11 ], and Espresso [ 12 ] support this data model. Example 2-1. Representing a LinkedIn profile as a JSON document { "user_id" : 251 , "first_name" : "Bill" , "last_name" : "Gates" , "summary" : "Co-chair of the Bill & Melinda Gates... Active blogger." 
, "region_id" : "us:91" , "industry_id" : 131 , "photo_url" : "/p/7/000/253/05b/308dd6e.jpg" , "positions" : [ { "job_title" : "Co-chair" , "organization" : "Bill & Melinda Gates Foundation" }, { "job_title" : "Co-founder, Chairman" , "organization" : "Microsoft" } ], "education" : [ { "school_name" : "Harvard University" , "start" : 1973 , "end" : 1975 }, { "school_name" : "Lakeside School, Seattle" , "start" : null , "end" : null } ], "contact_info" : { "blog" : "http://thegatesnotes.com" , "twitter" : "http://twitter.com/BillGates" } } Some developers feel that the JSON model reduces the impedance mismatch between the application code and the storage layer. However, as we shall see in Chapter 4 , there are also problems with JSON as a data encoding format. The lack of a schema is often cited as an advantage; we will discuss this in “Schema flexibility in the document model” . The JSON representation has better locality than the multi-table schema in Figure 2-1 . If you want to fetch a profile in the relational example, you need to either perform multiple queries (query each table by user_id ) or perform a messy multi-way join between the users table and its subordinate tables. In the JSON representation, all the relevant information is in one place, and one query is sufficient. The one-to-many relationships from the user profile to the user’s positions, educational history, and contact information imply a tree structure in the data, and the JSON representation makes this tree structure explicit (see Figure 2-2 ). Figure 2-2. One-to-many relationships forming a tree structure. Many-to-One and Many-to-Many Relationships In Example 2-1 in the preceding section, region_id and industry_id are given as IDs, not as plain-text strings "Greater Seattle Area" and "Philanthropy" . Why? If the user interface has free-text fields for entering the region and the industry, it makes sense to store them as plain-text strings. But there are advantages to having standardized lists of geographic regions and industries, and letting users choose from a drop-down list or autocompleter: Consistent style and spelling across profiles Avoiding ambiguity (e.g., if there are several cities with the same name) Ease of updating — the name is stored in only one place, so it is easy to update across the board if it ever needs to be changed (e.g., change of a city name due to political events) Localization support — when the site is translated into other languages, the standardized lists can be localized, so the region and industry can be displayed in the viewer’s language Better search — e.g., a search for philanthropists in the state of Washington can match this profile, because the list of regions can encode the fact that Seattle is in Washington (which is not apparent from the string "Greater Seattle Area" ) Whether you store an ID or a text string is a question of duplication. When you use an ID, the information that is meaningful to humans (such as the word Philanthropy ) is stored in only one place, and everything that refers to it uses an ID (which only has meaning within the database). When you store the text directly, you are duplicating the human-meaningful information in every record that uses it. The advantage of using an ID is that because it has no meaning to humans, it never needs to change: the ID can remain the same, even if the information it identifies changes. 
Anything that is meaningful to humans may need to change sometime in the future — and if that information is duplicated, all the redundant copies need to be updated. That incurs write overheads, and risks inconsistencies (where some copies of the information are updated but others aren’t). Removing such duplication is the key idea behind normalization in databases. ii Note Database administrators and developers love to argue about normalization and denormalization, but we will suspend judgment for now. In Part III of this book we will return to this topic and explore systematic ways of dealing with caching, denormalization, and derived data. Unfortunately, normalizing this data requires many-to-one relationships (many people live in one particular region, many people work in one particular industry), which don’t fit nicely into the document model. In relational databases, it’s normal to refer to rows in other tables by ID, because joins are easy. In document databases, joins are not needed for one-to-many tree structures, and support for joins is often weak. iii If the database itself does not support joins, you have to emulate a join in application code by making multiple queries to the database. (In this case, the lists of regions and industries are probably small and slow-changing enough that the application can simply keep them in memory. But nevertheless, the work of making the join is shifted from the database to the application code.) Moreover, even if the initial version of an application fits well in a join-free document model, data has a tendency of becoming more interconnected as features are added to applications. For example, consider some changes we could make to the résumé example: Organizations and schools as entities In the previous description, organization (the company where the user worked) and school_name (where they studied) are just strings. Perhaps they should be references to entities instead? Then each organization, school, or university could have its own web page (with logo, news feed, etc.); each résumé could link to the organizations and schools that it mentions, and include their logos and other information (see Figure 2-3 for an example from LinkedIn). Recommendations Say you want to add a new feature: one user can write a recommendation for another user. The recommendation is shown on the résumé of the user who was recommended, together with the name and photo of the user making the recommendation. If the recommender updates their photo, any recommendations they have written need to reflect the new photo. Therefore, the recommendation should have a reference to the author’s profile. Figure 2-3. The company name is not just a string, but a link to a company entity. Screenshot of linkedin.com. Figure 2-4 illustrates how these new features require many-to-many relationships. The data within each dotted rectangle can be grouped into one document, but the references to organizations, schools, and other users need to be represented as references, and require joins when queried. Figure 2-4. Extending résumés with many-to-many relationships. Are Document Databases Repeating History? While many-to-many relationships and joins are routinely used in relational databases, document databases and NoSQL reopened the debate on how best to represent such relationships in a database. This debate is much older than NoSQL — in fact, it goes back to the very earliest computerized database systems. 
The most popular database for business data processing in the 1970s was IBM’s Information Management System (IMS), originally developed for stock-keeping in the Apollo space program and first commercially released in 1968 [ 13 ]. It is still in use and maintained today, running on OS/390 on IBM mainframes [ 14 ]. The design of IMS used a fairly simple data model called the hierarchical model , which has some remarkable similarities to the JSON model used by document databases [ 2 ]. It represented all data as a tree of records nested within records, much like the JSON structure of Figure 2-2 . Like document databases, IMS worked well for one-to-many relationships, but it made many-to-many relationships difficult, and it didn’t support joins. Developers had to decide whether to duplicate (denormalize) data or to manually resolve references from one record to another. These problems of the 1960s and ’70s were very much like the problems that developers are running into with document databases today [ 15 ]. Various solutions were proposed to solve the limitations of the hierarchical model. The two most prominent were the relational model (which became SQL, and took over the world) and the network model (which initially had a large following but eventually faded into obscurity). The “great debate” between these two camps lasted for much of the 1970s [ 2 ]. Since the problem that the two models were solving is still so relevant today, it’s worth briefly revisiting this debate in today’s light. The network model The network model was standardized by a committee called the Conference on Data Systems Languages (CODASYL) and implemented by several different database vendors; it is also known as the CODASYL model [ 16 ]. The CODASYL model was a generalization of the hierarchical model. In the tree structure of the hierarchical model, every record has exactly one parent; in the network model, a record could have multiple parents. For example, there could be one record for the "Greater Seattle Area" region, and every user who lived in that region could be linked to it. This allowed many-to-one and many-to-many relationships to be modeled. The links between records in the network model were not foreign keys, but more like pointers in a programming language (while still being stored on disk). The only way of accessing a record was to follow a path from a root record along these chains of links. This was called an access path . In the simplest case, an access path could be like the traversal of a linked list: start at the head of the list, and look at one record at a time until you find the one you want. But in a world of many-to-many relationships, several different paths can lead to the same record, and a programmer working with the network model had to keep track of these different access paths in their head. A query in CODASYL was performed by moving a cursor through the database by iterating over lists of records and following access paths. If a record had multiple parents (i.e., multiple incoming pointers from other records), the application code had to keep track of all the various relationships. Even CODASYL committee members admitted that this was like navigating around an n -dimensional data space [ 17 ]. Although manual access path selection was able to make the most efficient use of the very limited hardware capabilities in the 1970s (such as tape drives, whose seeks are extremely slow), the problem was that they made the code for querying and updating the database complicated and inflexible. 
With both the hierarchical and the network model, if you didn’t have a path to the data you wanted, you were in a difficult situation. You could change the access paths, but then you had to go through a lot of handwritten database query code and rewrite it to handle the new access paths. It was difficult to make changes to an application’s data model. The relational model What the relational model did, by contrast, was to lay out all the data in the open: a relation (table) is simply a collection of tuples (rows), and that’s it. There are no labyrinthine nested structures, no complicated access paths to follow if you want to look at the data. You can read any or all of the rows in a table, selecting those that match an arbitrary condition. You can read a particular row by designating some columns as a key and matching on those. You can insert a new row into any table without worrying about foreign key relationships to and from other tables. iv In a relational database, the query optimizer automatically decides which parts of the query to execute in which order, and which indexes to use. Those choices are effectively the “access path,” but the big difference is that they are made automatically by the query optimizer, not by the application developer, so we rarely need to think about them. If you want to query your data in new ways, you can just declare a new index, and queries will automatically use whichever indexes are most appropriate. You don’t need to change your queries to take advantage of a new index. (See also “Query Languages for Data” .) The relational model thus made it much easier to add new features to applications. Query optimizers for relational databases are complicated beasts, and they have consumed many years of research and development effort [ 18 ]. But a key insight of the relational model was this: you only need to build a query optimizer once, and then all applications that use the database can benefit from it. If you don’t have a query optimizer, it’s easier to handcode the access paths for a particular query than to write a general-purpose optimizer — but the general-purpose solution wins in the long run. Comparison to document databases Document databases reverted back to the hierarchical model in one aspect: storing nested records (one-to-many relationships, like positions , education , and contact_info in Figure 2-1 ) within their parent record rather than in a separate table. However, when it comes to representing many-to-one and many-to-many relationships, relational and document databases are not fundamentally different: in both cases, the related item is referenced by a unique identifier, which is called a foreign key in the relational model and a document reference in the document model [ 9 ]. That identifier is resolved at read time by using a join or follow-up queries. To date, document databases have not followed the path of CODASYL. Relational Versus Document Databases Today There are many differences to consider when comparing relational databases to document databases, including their fault-tolerance properties (see Chapter 5 ) and handling of concurrency (see Chapter 7 ). In this chapter, we will concentrate only on the differences in the data model. The main arguments in favor of the document data model are schema flexibility, better performance due to locality, and that for some applications it is closer to the data structures used by the application. 
The relational model counters by providing better support for joins, and many-to-one and many-to-many relationships. Which data model leads to simpler application code? If the data in your application has a document-like structure (i.e., a tree of one-to-many relationships, where typically the entire tree is loaded at once), then it’s probably a good idea to use a document model. The relational technique of shredding — splitting a document-like structure into multiple tables (like positions , education , and contact_info in Figure 2-1 ) — can lead to cumbersome schemas and unnecessarily complicated application code. The document model has limitations: for example, you cannot refer directly to a nested item within a document, but instead you need to say something like “the second item in the list of positions for user 251” (much like an access path in the hierarchical model). However, as long as documents are not too deeply nested, that is not usually a problem. The poor support for joins in document databases may or may not be a problem, depending on the application. For example, many-to-many relationships may never be needed in an analytics application that uses a document database to record which events occurred at which time [ 19 ]. However, if your application does use many-to-many relationships, the document model becomes less appealing. It’s possible to reduce the need for joins by denormalizing, but then the application code needs to do additional work to keep the denormalized data consistent. Joins can be emulated in application code by making multiple requests to the database, but that also moves complexity into the application and is usually slower than a join performed by specialized code inside the database. In such cases, using a document model can lead to significantly more complex application code and worse performance [ 15 ]. It’s not possible to say in general which data model leads to simpler application code; it depends on the kinds of relationships that exist between data items. For highly interconnected data, the document model is awkward, the relational model is acceptable, and graph models (see “Graph-Like Data Models” ) are the most natural. Schema flexibility in the document model Most document databases, and the JSON support in relational databases, do not enforce any schema on the data in documents. XML support in relational databases usually comes with optional schema validation. No schema means that arbitrary keys and values can be added to a document, and when reading, clients have no guarantees as to what fields the documents may contain. Document databases are sometimes called schemaless , but that’s misleading, as the code that reads the data usually assumes some kind of structure — i.e., there is an implicit schema, but it is not enforced by the database [ 20 ]. A more accurate term is schema-on-read (the structure of the data is implicit, and only interpreted when the data is read), in contrast with schema-on-write (the traditional approach of relational databases, where the schema is explicit and the database ensures all written data conforms to it) [ 21 ]. Schema-on-read is similar to dynamic (runtime) type checking in programming languages, whereas schema-on-write is similar to static (compile-time) type checking. Just as the advocates of static and dynamic type checking have big debates about their relative merits [ 22 ], enforcement of schemas in database is a contentious topic, and in general there’s no right or wrong answer. 
The difference between the approaches is particularly noticeable in situations where an application wants to change the format of its data. For example, say you are currently storing each user’s full name in one field, and you instead want to store the first name and last name separately [ 23 ]. In a document database, you would just start writing new documents with the new fields and have code in the application that handles the case when old documents are read. For example: if ( user && user . name && ! user . first_name ) { // Documents written before Dec 8, 2013 don't have first_name user . first_name = user . name . split ( " " )[ 0 ]; } On the other hand, in a “statically typed” database schema, you would typically perform a migration along the lines of: ALTER TABLE users ADD COLUMN first_name text ; UPDATE users SET first_name = split_part ( name , ' ' , 1 ); -- PostgreSQL UPDATE users SET first_name = substring_index ( name , ' ' , 1 ); -- MySQL Schema changes have a bad reputation of being slow and requiring downtime. This reputation is not entirely deserved: most relational database systems execute the ALTER TABLE statement in a few milliseconds. MySQL is a notable exception — it copies the entire table on ALTER TABLE , which can mean minutes or even hours of downtime when altering a large table — although various tools exist to work around this limitation [ 24 , 25 , 26 ]. Running the UPDATE statement on a large table is likely to be slow on any database, since every row needs to be rewritten. If that is not acceptable, the application can leave first_name set to its default of NULL and fill it in at read time, like it would with a document database. The schema-on-read approach is advantageous if the items in the collection don’t all have the same structure for some reason (i.e., the data is heterogeneous) — for example, because: There are many different types of objects, and it is not practical to put each type of object in its own table. The structure of the data is determined by external systems over which you have no control and which may change at any time. In situations like these, a schema may hurt more than it helps, and schemaless documents can be a much more natural data model. But in cases where all records are expected to have the same structure, schemas are a useful mechanism for documenting and enforcing that structure. We will discuss schemas and schema evolution in more detail in Chapter 4 . Data locality for queries A document is usually stored as a single continuous string, encoded as JSON, XML, or a binary variant thereof (such as MongoDB’s BSON). If your application often needs to access the entire document (for example, to render it on a web page), there is a performance advantage to this storage locality . If data is split across multiple tables, like in Figure 2-1 , multiple index lookups are required to retrieve it all, which may require more disk seeks and take more time. The locality advantage only applies if you need large parts of the document at the same time. The database typically needs to load the entire document, even if you access only a small portion of it, which can be wasteful on large documents. On updates to a document, the entire document usually needs to be rewritten — only modifications that don’t change the encoded size of a document can easily be performed in place [ 19 ]. For these reasons, it is generally recommended that you keep documents fairly small and avoid writes that increase the size of a document [ 9 ]. 
These performance limitations significantly reduce the set of situations in which document databases are useful. It’s worth pointing out that the idea of grouping related data together for locality is not limited to the document model. For example, Google’s Spanner database offers the same locality properties in a relational data model, by allowing the schema to declare that a table’s rows should be interleaved (nested) within a parent table [ 27 ]. Oracle allows the same, using a feature called multi-table index cluster tables [ 28 ]. The column-family concept in the Bigtable data model (used in Cassandra and HBase) has a similar purpose of managing locality [ 29 ]. We will also see more on locality in Chapter 3 . Convergence of document and relational databases Most relational database systems (other than MySQL) have supported XML since the mid-2000s. This includes functions to make local modifications to XML documents and the ability to index and query inside XML documents, which allows applications to use data models very similar to what they would do when using a document database. PostgreSQL since version 9.3 [ 8 ], MySQL since version 5.7, and IBM DB2 since version 10.5 [ 30 ] also have a similar level of support for JSON documents. Given the popularity of JSON for web APIs, it is likely that other relational databases will follow in their footsteps and add JSON support. On the document database side, RethinkDB supports relational-like joins in its query language, and some MongoDB drivers automatically resolve database references (effectively performing a client-side join, although this is likely to be slower than a join performed in the database since it requires additional network round-trips and is less optimized). It seems that relational and document databases are becoming more similar over time, and that is a good thing: the data models complement each other. v If a database is able to handle document-like data and also perform relational queries on it, applications can use the combination of features that best fits their needs. A hybrid of the relational and document models is a good route for databases to take in the future. Query Languages for Data When the relational model was introduced, it included a new way of querying data: SQL is a declarative query language, whereas IMS and CODASYL queried the database using imperative code. What does that mean? Many commonly used programming languages are imperative. For example, if you have a list of animal species, you might write something like this to return only the sharks in the list: function getSharks () { var sharks = []; for ( var i = 0 ; i < animals . length ; i ++ ) { if ( animals [ i ]. family === "Sharks" ) { sharks . push ( animals [ i ]); } } return sharks ; } In the relational algebra, you would instead write: sharks  =  σ family = “Sharks” (animals) where σ (the Greek letter sigma) is the selection operator, returning only those animals that match the condition family = “Sharks” . When SQL was defined, it followed the structure of the relational algebra fairly closely: SELECT * FROM animals WHERE family = 'Sharks' ; An imperative language tells the computer to perform certain operations in a certain order. You can imagine stepping through the code line by line, evaluating conditions, updating variables, and deciding whether to go around the loop one more time. 
In a declarative query language, like SQL or relational algebra, you just specify the pattern of the data you want — what conditions the results must meet, and how you want the data to be transformed (e.g., sorted, grouped, and aggregated) — but not how to achieve that goal. It is up to the database system’s query optimizer to decide which indexes and which join methods to use, and in which order to execute various parts of the query. A declarative query language is attractive because it is typically more concise and easier to work with than an imperative API. But more importantly, it also hides implementation details of the database engine, which makes it possible for the database system to introduce performance improvements without requiring any changes to queries. For example, in the imperative code shown at the beginning of this section, the list of animals appears in a particular order. If the database wants to reclaim unused disk space behind the scenes, it might need to move records around, changing the order in which the animals appear. Can the database do that safely, without breaking queries? The SQL example doesn’t guarantee any particular ordering, and so it doesn’t mind if the order changes. But if the query is written as imperative code, the database can never be sure whether the code is relying on the ordering or not. The fact that SQL is more limited in functionality gives the database much more room for automatic optimizations. Finally, declarative languages often lend themselves to parallel execution. Today, CPUs are getting faster by adding more cores, not by running at significantly higher clock speeds than before [ 31 ]. Imperative code is very hard to parallelize across multiple cores and multiple machines, because it specifies instructions that must be performed in a particular order. Declarative languages have a better chance of getting faster in parallel execution because they specify only the pattern of the results, not the algorithm that is used to determine the results. The database is free to use a parallel implementation of the query language, if appropriate [ 32 ]. Declarative Queries on the Web The advantages of declarative query languages are not limited to just databases. To illustrate the point, let’s compare declarative and imperative approaches in a completely different environment: a web browser. Say you have a website about animals in the ocean. The user is currently viewing the page on sharks, so you mark the navigation item “Sharks” as currently selected, like this: <ul > <li class= "selected" > <p > Sharks </p> <ul > <li > Great White Shark </li> <li > Tiger Shark </li> <li > Hammerhead Shark </li> </ul> </li> <li > <p > Whales </p> <ul > <li > Blue Whale </li> <li > Humpback Whale </li> <li > Fin Whale </li> </ul> </li> </ul> The selected item is marked with the CSS class "selected" . <p>Sharks</p> is the title of the currently selected page. Now say you want the title of the currently selected page to have a blue background, so that it is visually highlighted. This is easy, using CSS: li .selected > p { background-color : blue ; } Here the CSS selector li.selected > p declares the pattern of elements to which we want to apply the blue style: namely, all <p> elements whose direct parent is an <li> element with a CSS class of selected . The element <p>Sharks</p> in the example matches this pattern, but <p>Whales</p> does not match because its <li> parent lacks class="selected" . 
If you were using XSL instead of CSS, you could do something similar: <xsl:template match= "li[@class='selected']/p" > <fo:block background-color= "blue" > <xsl:apply-templates /> </fo:block> </xsl:template> Here, the XPath expression li[@class='selected']/p is equivalent to the CSS selector li.selected > p in the previous example. What CSS and XSL have in common is that they are both declarative languages for specifying the styling of a document. Imagine what life would be like if you had to use an imperative approach. In JavaScript, using the core Document Object Model (DOM) API, the result might look something like this: var liElements = document . getElementsByTagName ( "li" ); for ( var i = 0 ; i < liElements . length ; i ++ ) { if ( liElements [ i ]. className === "selected" ) { var children = liElements [ i ]. childNodes ; for ( var j = 0 ; j < children . length ; j ++ ) { var child = children [ j ]; if ( child . nodeType === Node . ELEMENT_NODE && child . tagName === "P" ) { child . setAttribute ( "style" , "background-color: blue" ); } } } } This JavaScript imperatively sets the element <p>Sharks</p> to have a blue background, but the code is awful. Not only is it much longer and harder to understand than the CSS and XSL equivalents, but it also has some serious problems: If the selected class is removed (e.g., because the user clicks a different page), the blue color won’t be removed, even if the code is rerun — and so the item will remain highlighted until the entire page is reloaded. With CSS, the browser automatically detects when the li.selected > p rule no longer applies and removes the blue background as soon as the selected class is removed. If you want to take advantage of a new API, such as document.getElementsByClassName("selected") or even document.evaluate() — which may improve performance — you have to rewrite the code. On the other hand, browser vendors can improve the performance of CSS and XPath without breaking compatibility. In a web browser, using declarative CSS styling is much better than manipulating styles imperatively in JavaScript. Similarly, in databases, declarative query languages like SQL turned out to be much better than imperative query APIs. vi MapReduce Querying MapReduce is a programming model for processing large amounts of data in bulk across many machines, popularized by Google [ 33 ]. A limited form of MapReduce is supported by some NoSQL datastores, including MongoDB and CouchDB, as a mechanism for performing read-only queries across many documents. MapReduce in general is described in more detail in Chapter 10 . For now, we’ll just briefly discuss MongoDB’s use of the model. MapReduce is neither a declarative query language nor a fully imperative query API, but somewhere in between: the logic of the query is expressed with snippets of code, which are called repeatedly by the processing framework. It is based on the map (also known as collect ) and reduce (also known as fold or inject ) functions that exist in many functional programming languages. To give an example, imagine you are a marine biologist, and you add an observation record to your database every time you see animals in the ocean. Now you want to generate a report saying how many sharks you have sighted per month. 
In PostgreSQL you might express that query like this: SELECT date_trunc ( 'month' , observation_timestamp ) AS observation_month , sum ( num_animals ) AS total_animals FROM observations WHERE family = 'Sharks' GROUP BY observation_month ; The date_trunc('month', timestamp) function determines the calendar month containing timestamp , and returns another timestamp representing the beginning of that month. In other words, it rounds a timestamp down to the nearest month. This query first filters the observations to only show species in the Sharks family, then groups the observations by the calendar month in which they occurred, and finally adds up the number of animals seen in all observations in that month. The same can be expressed with MongoDB’s MapReduce feature as follows: db . observations . mapReduce ( function map ( ) { var year = this . observationTimestamp . getFullYear ( ) ; var month = this . observationTimestamp . getMonth ( ) + 1 ; emit ( year + "-" + month , this . numAnimals ) ; } , function reduce ( key , values ) { return Array . sum ( values ) ; } , { query : { family : "Sharks" } , out : "monthlySharkReport" } ) ; The filter to consider only shark species can be specified declaratively (this is a MongoDB-specific extension to MapReduce). The JavaScript function map is called once for every document that matches query , with this set to the document object. The map function emits a key (a string consisting of year and month, such as "2013-12" or "2014-1" ) and a value (the number of animals in that observation). The key-value pairs emitted by map are grouped by key. For all key-value pairs with the same key (i.e., the same month and year), the reduce function is called once. The reduce function adds up the number of animals from all observations in a particular month. The final output is written to the collection monthlySharkReport . For example, say the observations collection contains these two documents: { observationTimestamp : Date . parse ( "Mon, 25 Dec 1995 12:34:56 GMT" ), family : "Sharks" , species : "Carcharodon carcharias" , numAnimals : 3 } { observationTimestamp : Date . parse ( "Tue, 12 Dec 1995 16:17:18 GMT" ), family : "Sharks" , species : "Carcharias taurus" , numAnimals : 4 } The map function would be called once for each document, resulting in emit("1995-12", 3) and emit("1995-12", 4) . Subsequently, the reduce function would be called with reduce("1995-12", [3, 4]) , returning 7 . The map and reduce functions are somewhat restricted in what they are allowed to do. They must be pure functions, which means they only use the data that is passed to them as input, they cannot perform additional database queries, and they must not have any side effects. These restrictions allow the database to run the functions anywhere, in any order, and rerun them on failure. However, they are nevertheless powerful: they can parse strings, call library functions, perform calculations, and more. MapReduce is a fairly low-level programming model for distributed execution on a cluster of machines. Higher-level query languages like SQL can be implemented as a pipeline of MapReduce operations (see Chapter 10 ), but there are also many distributed implementations of SQL that don’t use MapReduce. Note there is nothing in SQL that constrains it to running on a single machine, and MapReduce doesn’t have a monopoly on distributed query execution. 
Being able to use JavaScript code in the middle of a query is a great feature for advanced queries, but it’s not limited to MapReduce — some SQL databases can be extended with JavaScript functions too [ 34 ]. A usability problem with MapReduce is that you have to write two carefully coordinated JavaScript functions, which is often harder than writing a single query. Moreover, a declarative query language offers more opportunities for a query optimizer to improve the performance of a query. For these reasons, MongoDB 2.2 added support for a declarative query language called the aggregation pipeline [ 9 ]. In this language, the same shark-counting query looks like this: db . observations . aggregate ([ { $match : { family : "Sharks" } }, { $group : { _id : { year : { $year : "$observationTimestamp" }, month : { $month : "$observationTimestamp" } }, totalAnimals : { $sum : "$numAnimals" } } } ]); The aggregation pipeline language is similar in expressiveness to a subset of SQL, but it uses a JSON-based syntax rather than SQL’s English-sentence-style syntax; the difference is perhaps a matter of taste. The moral of the story is that a NoSQL system may find itself accidentally reinventing SQL, albeit in disguise. Graph-Like Data Models We saw earlier that many-to-many relationships are an important distinguishing feature between different data models. If your application has mostly one-to-many relationships (tree-structured data) or no relationships between records, the document model is appropriate. But what if many-to-many relationships are very common in your data? The relational model can handle simple cases of many-to-many relationships, but as the connections within your data become more complex, it becomes more natural to start modeling your data as a graph. A graph consists of two kinds of objects: vertices (also known as nodes or entities ) and edges (also known as relationships or arcs ). Many kinds of data can be modeled as a graph. Typical examples include: Social graphs Vertices are people, and edges indicate which people know each other. The web graph Vertices are web pages, and edges indicate HTML links to other pages. Road or rail networks Vertices are junctions, and edges represent the roads or railway lines between them. Well-known algorithms can operate on these graphs: for example, car navigation systems search for the shortest path between two points in a road network, and PageRank can be used on the web graph to determine the popularity of a web page and thus its ranking in search results. In the examples just given, all the vertices in a graph represent the same kind of thing (people, web pages, or road junctions, respectively). However, graphs are not limited to such homogeneous data: an equally powerful use of graphs is to provide a consistent way of storing completely different types of objects in a single datastore. For example, Facebook maintains a single graph with many different types of vertices and edges: vertices represent people, locations, events, checkins, and comments made by users; edges indicate which people are friends with each other, which checkin happened in which location, who commented on which post, who attended which event, and so on [ 35 ]. In this section we will use the example shown in Figure 2-5 . It could be taken from a social network or a genealogical database: it shows two people, Lucy from Idaho and Alain from Beaune, France. They are married and living in London. Figure 2-5. 
Example of graph-structured data (boxes represent vertices, arrows represent edges). There are several different, but related, ways of structuring and querying data in graphs. In this section we will discuss the property graph model (implemented by Neo4j, Titan, and InfiniteGraph) and the triple-store model (implemented by Datomic, AllegroGraph, and others). We will look at three declarative query languages for graphs: Cypher, SPARQL, and Datalog. Besides these, there are also imperative graph query languages such as Gremlin [ 36 ] and graph processing frameworks like Pregel (see Chapter 10 ). Property Graphs In the property graph model, each vertex consists of: A unique identifier A set of outgoing edges A set of incoming edges A collection of properties (key-value pairs) Each edge consists of: A unique identifier The vertex at which the edge starts (the tail vertex ) The vertex at which the edge ends (the head vertex ) A label to describe the kind of relationship between the two vertices A collection of properties (key-value pairs) You can think of a graph store as consisting of two relational tables, one for vertices and one for edges, as shown in Example 2-2 (this schema uses the PostgreSQL json datatype to store the properties of each vertex or edge). The head and tail vertex are stored for each edge; if you want the set of incoming or outgoing edges for a vertex, you can query the edges table by head_vertex or tail_vertex , respectively. Example 2-2. Representing a property graph using a relational schema CREATE TABLE vertices ( vertex_id integer PRIMARY KEY , properties json ); CREATE TABLE edges ( edge_id integer PRIMARY KEY , tail_vertex integer REFERENCES vertices ( vertex_id ), head_vertex integer REFERENCES vertices ( vertex_id ), label text , properties json ); CREATE INDEX edges_tails ON edges ( tail_vertex ); CREATE INDEX edges_heads ON edges ( head_vertex ); Some important aspects of this model are: Any vertex can have an edge connecting it with any other vertex. There is no schema that restricts which kinds of things can or cannot be associated. Given any vertex, you can efficiently find both its incoming and its outgoing edges, and thus traverse the graph — i.e., follow a path through a chain of vertices — both forward and backward. (That’s why Example 2-2 has indexes on both the tail_vertex and head_vertex columns.) By using different labels for different kinds of relationships, you can store several different kinds of information in a single graph, while still maintaining a clean data model. Those features give graphs a great deal of flexibility for data modeling, as illustrated in Figure 2-5 . The figure shows a few things that would be difficult to express in a traditional relational schema, such as different kinds of regional structures in different countries (France has départements and régions , whereas the US has counties and states ), quirks of history such as a country within a country (ignoring for now the intricacies of sovereign states and nations), and varying granularity of data (Lucy’s current residence is specified as a city, whereas her place of birth is specified only at the level of a state). You could imagine extending the graph to also include many other facts about Lucy and Alain, or other people. 
For instance, you could use it to indicate any food allergies they have (by introducing a vertex for each allergen, and an edge between a person and an allergen to indicate an allergy), and link the allergens with a set of vertices that show which foods contain which substances. Then you could write a query to find out what is safe for each person to eat. Graphs are good for evolvability: as you add features to your application, a graph can easily be extended to accommodate changes in your application’s data structures. The Cypher Query Language Cypher is a declarative query language for property graphs, created for the Neo4j graph database [ 37 ]. (It is named after a character in the movie The Matrix and is not related to ciphers in cryptography [ 38 ].) Example 2-3 shows the Cypher query to insert the lefthand portion of Figure 2-5 into a graph database. The rest of the graph can be added similarly and is omitted for readability. Each vertex is given a symbolic name like USA or Idaho , and other parts of the query can use those names to create edges between the vertices, using an arrow notation: (Idaho) -[:WITHIN]-> (USA) creates an edge labeled WITHIN , with Idaho as the tail node and USA as the head node. Example 2-3. A subset of the data in Figure 2-5 , represented as a Cypher query CREATE (NAmerica:Location {name: 'North America' , type : 'continent' }), (USA:Location {name: 'United States' , type : 'country' }), (Idaho:Location {name: 'Idaho' , type : 'state' }), (Lucy:Person {name: 'Lucy' }), (Idaho) -[:WITHIN]-> (USA) -[:WITHIN]-> (NAmerica), (Lucy) -[:BORN_IN]-> (Idaho) When all the vertices and edges of Figure 2-5 are added to the database, we can start asking interesting questions: for example, find the names of all the people who emigrated from the United States to Europe . To be more precise, here we want to find all the vertices that have a BORN_IN edge to a location within the US, and also a LIVING_IN edge to a location within Europe, and return the name property of each of those vertices. Example 2-4 shows how to express that query in Cypher. The same arrow notation is used in a MATCH clause to find patterns in the graph: (person) -[:BORN_IN]-> () matches any two vertices that are related by an edge labeled BORN_IN . The tail vertex of that edge is bound to the variable person , and the head vertex is left unnamed. Example 2-4. Cypher query to find people who emigrated from the US to Europe MATCH (person) -[:BORN_IN]-> () -[:WITHIN*0..]-> (us:Location {name: 'United States' }), (person) -[:LIVES_IN]-> () -[:WITHIN*0..]-> (eu:Location {name: 'Europe' }) RETURN person.name The query can be read as follows: Find any vertex (call it person ) that meets both of the following conditions: person has an outgoing BORN_IN edge to some vertex. From that vertex, you can follow a chain of outgoing WITHIN edges until eventually you reach a vertex of type Location , whose name property is equal to "United States" . That same person vertex also has an outgoing LIVES_IN edge. Following that edge, and then a chain of outgoing WITHIN edges, you eventually reach a vertex of type Location , whose name property is equal to "Europe" . For each such person vertex, return the name property. There are several possible ways of executing the query. The description given here suggests that you start by scanning all the people in the database, examine each person’s birthplace and residence, and return only those people who meet the criteria. 
But equivalently, you could start with the two Location vertices and work backward. If there is an index on the name property, you can probably efficiently find the two vertices representing the US and Europe. Then you can proceed to find all locations (states, regions, cities, etc.) in the US and Europe respectively by following all incoming WITHIN edges. Finally, you can look for people who can be found through an incoming BORN_IN or LIVES_IN edge at one of the location vertices. As is typical for a declarative query language, you don't need to specify such execution details when writing the query: the query optimizer automatically chooses the strategy that is predicted to be the most efficient, so you can get on with writing the rest of your application.

Graph Queries in SQL

Example 2-2 suggested that graph data can be represented in a relational database. But if we put graph data in a relational structure, can we also query it using SQL? The answer is yes, but with some difficulty. In a relational database, you usually know in advance which joins you need in your query. In a graph query, you may need to traverse a variable number of edges before you find the vertex you're looking for — that is, the number of joins is not fixed in advance. In our example, that happens in the () -[:WITHIN*0..]-> () rule in the Cypher query. A person's LIVES_IN edge may point at any kind of location: a street, a city, a district, a region, a state, etc. A city may be WITHIN a region, a region WITHIN a state, a state WITHIN a country, etc. The LIVES_IN edge may point directly at the location vertex you're looking for, or it may be several levels removed in the location hierarchy. In Cypher, :WITHIN*0.. expresses that fact very concisely: it means "follow a WITHIN edge, zero or more times." It is like the * operator in a regular expression. Since SQL:1999, this idea of variable-length traversal paths in a query can be expressed using something called recursive common table expressions (the WITH RECURSIVE syntax). Example 2-5 shows the same query — finding the names of people who emigrated from the US to Europe — expressed in SQL using this technique (supported in PostgreSQL, IBM DB2, Oracle, and SQL Server). However, the syntax is very clumsy in comparison to Cypher.

Example 2-5. The same query as Example 2-4, expressed in SQL using recursive common table expressions

WITH RECURSIVE
  -- in_usa is the set of vertex IDs of all locations within the United States
  in_usa(vertex_id) AS (
      SELECT vertex_id FROM vertices WHERE properties->>'name' = 'United States'
    UNION
      SELECT edges.tail_vertex FROM edges
        JOIN in_usa ON edges.head_vertex = in_usa.vertex_id
        WHERE edges.label = 'within'
  ),
  -- in_europe is the set of vertex IDs of all locations within Europe
  in_europe(vertex_id) AS (
      SELECT vertex_id FROM vertices WHERE properties->>'name' = 'Europe'
    UNION
      SELECT edges.tail_vertex FROM edges
        JOIN in_europe ON edges.head_vertex = in_europe.vertex_id
        WHERE edges.label = 'within'
  ),
  -- born_in_usa is the set of vertex IDs of all people born in the US
  born_in_usa(vertex_id) AS (
    SELECT edges.tail_vertex FROM edges
      JOIN in_usa ON edges.head_vertex = in_usa.vertex_id
      WHERE edges.label = 'born_in'
  ),
  -- lives_in_europe is the set of vertex IDs of all people living in Europe
  lives_in_europe(vertex_id) AS (
    SELECT edges.tail_vertex FROM edges
      JOIN in_europe ON edges.head_vertex = in_europe.vertex_id
      WHERE edges.label = 'lives_in'
  )
SELECT vertices.properties->>'name'
FROM vertices
-- join to find those people who were both born in the US *and* live in Europe
JOIN born_in_usa ON vertices.vertex_id = born_in_usa.vertex_id
JOIN lives_in_europe ON vertices.vertex_id = lives_in_europe.vertex_id;

First find the vertex whose name property has the value "United States", and make it the first element of the set of vertices in_usa. Follow all incoming within edges from vertices in the set in_usa, and add them to the same set, until all incoming within edges have been visited. Do the same starting with the vertex whose name property has the value "Europe", and build up the set of vertices in_europe. For each of the vertices in the set in_usa, follow incoming born_in edges to find people who were born in some place within the United States. Similarly, for each of the vertices in the set in_europe, follow incoming lives_in edges to find people who live in Europe. Finally, intersect the set of people born in the USA with the set of people living in Europe, by joining them. If the same query can be written in 4 lines in one query language but requires 29 lines in another, that just shows that different data models are designed to satisfy different use cases. It's important to pick a data model that is suitable for your application.

Triple-Stores and SPARQL

The triple-store model is mostly equivalent to the property graph model, using different words to describe the same ideas. It is nevertheless worth discussing, because there are various tools and languages for triple-stores that can be valuable additions to your toolbox for building applications. In a triple-store, all information is stored in the form of very simple three-part statements: (subject, predicate, object). For example, in the triple (Jim, likes, bananas), Jim is the subject, likes is the predicate (verb), and bananas is the object. The subject of a triple is equivalent to a vertex in a graph. The object is one of two things: A value in a primitive datatype, such as a string or a number. In that case, the predicate and object of the triple are equivalent to the key and value of a property on the subject vertex. For example, (lucy, age, 33) is like a vertex lucy with properties {"age":33}. Another vertex in the graph. In that case, the predicate is an edge in the graph, the subject is the tail vertex, and the object is the head vertex. For example, in (lucy, marriedTo, alain) the subject and object lucy and alain are both vertices, and the predicate marriedTo is the label of the edge that connects them. Example 2-6 shows the same data as in Example 2-3, written as triples in a format called Turtle, a subset of Notation3 (N3) [39].

Example 2-6. A subset of the data in Figure 2-5, represented as Turtle triples

@prefix : <urn:example:>.
_:lucy     a :Person.
_:lucy     :name "Lucy".
_:lucy     :bornIn _:idaho.
_:idaho    a :Location.
_:idaho    :name "Idaho".
_:idaho    :type "state".
_:idaho    :within _:usa.
_:usa      a :Location.
_:usa      :name "United States".
_:usa      :type "country".
_:usa      :within _:namerica.
_:namerica a :Location.
_:namerica :name "North America".
_:namerica :type "continent".

In this example, vertices of the graph are written as _:someName. The name doesn't mean anything outside of this file; it exists only because we otherwise wouldn't know which triples refer to the same vertex. When the predicate represents an edge, the object is a vertex, as in _:idaho :within _:usa.
When the predicate is a property, the object is a string literal, as in _:usa :name "United States" . It’s quite repetitive to repeat the same subject over and over again, but fortunately you can use semicolons to say multiple things about the same subject. This makes the Turtle format quite nice and readable: see Example 2-7 . Example 2-7. A more concise way of writing the data in Example 2-6 @prefix : <urn:example:>. _:lucy a :Person; :name "Lucy"; :bornIn _:idaho. _:idaho a :Location; :name "Idaho"; :type "state"; :within _:usa. _:usa a :Location; :name "United States"; :type "country"; :within _:namerica. _:namerica a :Location; :name "North America"; :type "continent". The semantic web If you read more about triple-stores, you may get sucked into a maelstrom of articles written about the semantic web . The triple-store data model is completely independent of the semantic web — for example, Datomic [ 40 ] is a triple-store that does not claim to have anything to do with it. vii But since the two are so closely linked in many people’s minds, we should discuss them briefly. The semantic web is fundamentally a simple and reasonable idea: websites already publish information as text and pictures for humans to read, so why don’t they also publish information as machine-readable data for computers to read? The Resource Description Framework (RDF) [ 41 ] was intended as a mechanism for different websites to publish data in a consistent format, allowing data from different websites to be automatically combined into a web of data — a kind of internet-wide “database of everything.” Unfortunately, the semantic web was overhyped in the early 2000s but so far hasn’t shown any sign of being realized in practice, which has made many people cynical about it. It has also suffered from a dizzying plethora of acronyms, overly complex standards proposals, and hubris. However, if you look past those failings, there is also a lot of good work that has come out of the semantic web project. Triples can be a good internal data model for applications, even if you have no interest in publishing RDF data on the semantic web. The RDF data model The Turtle language we used in Example 2-7 is a human-readable format for RDF data. Sometimes RDF is also written in an XML format, which does the same thing much more verbosely — see Example 2-8 . Turtle/N3 is preferable as it is much easier on the eyes, and tools like Apache Jena [ 42 ] can automatically convert between different RDF formats if necessary. Example 2-8. The data of Example 2-7 , expressed using RDF/XML syntax <rdf:RDF xmlns= "urn:example:" xmlns:rdf= "http://www.w3.org/1999/02/22-rdf-syntax-ns#" > <Location rdf:nodeID= "idaho" > <name> Idaho </name> <type> state </type> <within> <Location rdf:nodeID= "usa" > <name> United States </name> <type> country </type> <within> <Location rdf:nodeID= "namerica" > <name> North America </name> <type> continent </type> </Location> </within> </Location> </within> </Location> <Person rdf:nodeID= "lucy" > <name> Lucy </name> <bornIn rdf:nodeID= "idaho" /> </Person> </rdf:RDF> RDF has a few quirks due to the fact that it is designed for internet-wide data exchange. The subject, predicate, and object of a triple are often URIs. For example, a predicate might be an URI such as <http://my-company.com/namespace#within> or <http://my-company.com/namespace#lives_in> , rather than just WITHIN or LIVES_IN . 
The reasoning behind this design is that you should be able to combine your data with someone else’s data, and if they attach a different meaning to the word within or lives_in , you won’t get a conflict because their predicates are actually <http://other.org/foo#within> and <http://other.org/foo#lives_in> . The URL <http://my-company.com/namespace> doesn’t necessarily need to resolve to anything — from RDF’s point of view, it is simply a namespace. To avoid potential confusion with http:// URLs, the examples in this section use non-resolvable URIs such as urn:example:within . Fortunately, you can just specify this prefix once at the top of the file, and then forget about it. The SPARQL query language SPARQL is a query language for triple-stores using the RDF data model [ 43 ]. (It is an acronym for SPARQL Protocol and RDF Query Language , pronounced “sparkle.”) It predates Cypher, and since Cypher’s pattern matching is borrowed from SPARQL, they look quite similar [ 37 ]. The same query as before — finding people who have moved from the US to Europe — is even more concise in SPARQL than it is in Cypher (see Example 2-9 ). Example 2-9. The same query as Example 2-4 , expressed in SPARQL PREFIX : < urn : example : > SELECT ? personName WHERE { ? person :name ? personName . ? person :bornIn / :within * / :name "United States" . ? person :livesIn / :within * / :name "Europe" . } The structure is very similar. The following two expressions are equivalent (variables start with a question mark in SPARQL): (person) -[:BORN_IN]-> () -[:WITHIN*0..]-> (location) # Cypher ?person :bornIn / :within* ?location. # SPARQL Because RDF doesn’t distinguish between properties and edges but just uses predicates for both, you can use the same syntax for matching properties. In the following expression, the variable usa is bound to any vertex that has a name property whose value is the string "United States" : (usa {name:'United States'}) # Cypher ?usa :name "United States". # SPARQL SPARQL is a nice query language — even if the semantic web never happens, it can be a powerful tool for applications to use internally. Graph Databases Compared to the Network Model In “Are Document Databases Repeating History?” we discussed how CODASYL and the relational model competed to solve the problem of many-to-many relationships in IMS. At first glance, CODASYL’s network model looks similar to the graph model. Are graph databases the second coming of CODASYL in disguise? No. They differ in several important ways: In CODASYL, a database had a schema that specified which record type could be nested within which other record type. In a graph database, there is no such restriction: any vertex can have an edge to any other vertex. This gives much greater flexibility for applications to adapt to changing requirements. In CODASYL, the only way to reach a particular record was to traverse one of the access paths to it. In a graph database, you can refer directly to any vertex by its unique ID, or you can use an index to find vertices with a particular value. In CODASYL, the children of a record were an ordered set, so the database had to maintain that ordering (which had consequences for the storage layout) and applications that inserted new records into the database had to worry about the positions of the new records in these sets. In a graph database, vertices and edges are not ordered (you can only sort the results when making a query). 
In CODASYL, all queries were imperative, difficult to write and easily broken by changes in the schema. In a graph database, you can write your traversal in imperative code if you want to, but most graph databases also support high-level, declarative query languages such as Cypher or SPARQL. The Foundation: Datalog Datalog is a much older language than SPARQL or Cypher, having been studied extensively by academics in the 1980s [ 44 , 45 , 46 ]. It is less well known among software engineers, but it is nevertheless important, because it provides the foundation that later query languages build upon. In practice, Datalog is used in a few data systems: for example, it is the query language of Datomic [ 40 ], and Cascalog [ 47 ] is a Datalog implementation for querying large datasets in Hadoop. viii Datalog’s data model is similar to the triple-store model, generalized a bit. Instead of writing a triple as ( subject , predicate , object ), we write it as predicate ( subject , object ). Example 2-10 shows how to write the data from our example in Datalog. Example 2-10. A subset of the data in Figure 2-5 , represented as Datalog facts name ( namerica , 'North America' ). type ( namerica , continent ). name ( usa , 'United States' ). type ( usa , country ). within ( usa , namerica ). name ( idaho , 'Idaho' ). type ( idaho , state ). within ( idaho , usa ). name ( lucy , 'Lucy' ). born_in ( lucy , idaho ). Now that we have defined the data, we can write the same query as before, as shown in Example 2-11 . It looks a bit different from the equivalent in Cypher or SPARQL, but don’t let that put you off. Datalog is a subset of Prolog, which you might have seen before if you’ve studied computer science. Example 2-11. The same query as Example 2-4 , expressed in Datalog within_recursive ( Location , Name ) :- name ( Location , Name ). /* Rule 1 */ within_recursive ( Location , Name ) :- within ( Location , Via ), /* Rule 2 */ within_recursive ( Via , Name ). migrated ( Name , BornIn , LivingIn ) :- name ( Person , Name ), /* Rule 3 */ born_in ( Person , BornLoc ), within_recursive ( BornLoc , BornIn ), lives_in ( Person , LivingLoc ), within_recursive ( LivingLoc , LivingIn ). ?- migrated ( Who , 'United States' , 'Europe' ). /* Who = 'Lucy'. */ Cypher and SPARQL jump in right away with SELECT , but Datalog takes a small step at a time. We define rules that tell the database about new predicates: here, we define two new predicates, within_recursive and migrated . These predicates aren’t triples stored in the database, but instead they are derived from data or from other rules. Rules can refer to other rules, just like functions can call other functions or recursively call themselves. Like this, complex queries can be built up a small piece at a time. In rules, words that start with an uppercase letter are variables, and predicates are matched like in Cypher and SPARQL. For example, name(Location, Name) matches the triple name(namerica, 'North America') with variable bindings Location = namerica and Name = 'North America' . A rule applies if the system can find a match for all predicates on the righthand side of the :- operator. When the rule applies, it’s as though the lefthand side of the :- was added to the database (with variables replaced by the values they matched). One possible way of applying the rules is thus: name(namerica, 'North America') exists in the database, so rule 1 applies. It generates within_recursive(namerica, 'North America') . 
within(usa, namerica) exists in the database and the previous step generated within_recursive(namerica, 'North America') , so rule 2 applies. It generates within_recursive(usa, 'North America') . within(idaho, usa) exists in the database and the previous step generated within_recursive(usa, 'North America') , so rule 2 applies. It generates within_recursive(idaho, 'North America') . By repeated application of rules 1 and 2, the within_recursive predicate can tell us all the locations in North America (or any other location name) contained in our database. This process is illustrated in Figure 2-6 . Figure 2-6. Determining that Idaho is in North America, using the Datalog rules from Example 2-11 . Now rule 3 can find people who were born in some location BornIn and live in some location LivingIn . By querying with BornIn = 'United States' and LivingIn = 'Europe' , and leaving the person as a variable Who , we ask the Datalog system to find out which values can appear for the variable Who . So, finally we get the same answer as in the earlier Cypher and SPARQL queries. The Datalog approach requires a different kind of thinking to the other query languages discussed in this chapter, but it’s a very powerful approach, because rules can be combined and reused in different queries. It’s less convenient for simple one-off queries, but it can cope better if your data is complex. Summary Data models are a huge subject, and in this chapter we have taken a quick look at a broad variety of different models. We didn’t have space to go into all the details of each model, but hopefully the overview has been enough to whet your appetite to find out more about the model that best fits your application’s requirements. Historically, data started out being represented as one big tree (the hierarchical model), but that wasn’t good for representing many-to-many relationships, so the relational model was invented to solve that problem. More recently, developers found that some applications don’t fit well in the relational model either. New nonrelational “NoSQL” datastores have diverged in two main directions: Document databases target use cases where data comes in self-contained documents and relationships between one document and another are rare. Graph databases go in the opposite direction, targeting use cases where anything is potentially related to everything. All three models (document, relational, and graph) are widely used today, and each is good in its respective domain. One model can be emulated in terms of another model — for example, graph data can be represented in a relational database — but the result is often awkward. That’s why we have different systems for different purposes, not a single one-size-fits-all solution. One thing that document and graph databases have in common is that they typically don’t enforce a schema for the data they store, which can make it easier to adapt applications to changing requirements. However, your application most likely still assumes that data has a certain structure; it’s just a question of whether the schema is explicit (enforced on write) or implicit (handled on read). Each data model comes with its own query language or framework, and we discussed several examples: SQL, MapReduce, MongoDB’s aggregation pipeline, Cypher, SPARQL, and Datalog. We also touched on CSS and XSL/XPath, which aren’t database query languages but have interesting parallels. Although we have covered a lot of ground, there are still many data models left unmentioned. 
To give just a few brief examples: Researchers working with genome data often need to perform sequence-similarity searches , which means taking one very long string (representing a DNA molecule) and matching it against a large database of strings that are similar, but not identical. None of the databases described here can handle this kind of usage, which is why researchers have written specialized genome database software like GenBank [ 48 ]. Particle physicists have been doing Big Data–style large-scale data analysis for decades, and projects like the Large Hadron Collider (LHC) now work with hundreds of petabytes! At such a scale custom solutions are required to stop the hardware cost from spiraling out of control [ 49 ]. Full-text search is arguably a kind of data model that is frequently used alongside databases. Information retrieval is a large specialist subject that we won’t cover in great detail in this book, but we’ll touch on search indexes in Chapter 3 and Part III . We have to leave it there for now. In the next chapter we will discuss some of the trade-offs that come into play when implementing the data models described in this chapter. Footnotes i A term borrowed from electronics. Every electric circuit has a certain impedance (resistance to alternating current) on its inputs and outputs. When you connect one circuit’s output to another one’s input, the power transfer across the connection is maximized if the output and input impedances of the two circuits match. An impedance mismatch can lead to signal reflections and other troubles. ii Literature on the relational model distinguishes several different normal forms, but the distinctions are of little practical interest. As a rule of thumb, if you’re duplicating values that could be stored in just one place, the schema is not normalized. iii At the time of writing, joins are supported in RethinkDB, not supported in MongoDB, and only supported in predeclared views in CouchDB. iv Foreign key constraints allow you to restrict modifications, but such constraints are not required by the relational model. Even with constraints, joins on foreign keys are performed at query time, whereas in CODASYL, the join was effectively done at insert time. v Codd’s original description of the relational model [ 1 ] actually allowed something quite similar to JSON documents within a relational schema. He called it nonsimple domains . The idea was that a value in a row doesn’t have to just be a primitive datatype like a number or a string, but could also be a nested relation (table) — so you can have an arbitrarily nested tree structure as a value, much like the JSON or XML support that was added to SQL over 30 years later. vi IMS and CODASYL both used imperative query APIs. Applications typically used COBOL code to iterate over records in the database, one record at a time [ 2 , 16 ]. vii Technically, Datomic uses 5-tuples rather than triples; the two additional fields are metadata for versioning. viii Datomic and Cascalog use a Clojure S-expression syntax for Datalog. In the following examples we use a Prolog syntax, which is a little easier to read, but this makes no functional difference. References [ 1 ] Edgar F. Codd: “ A Relational Model of Data for Large Shared Data Banks ,” Communications of the ACM , volume 13, number 6, pages 377–387, June 1970. doi:10.1145/362384.362685 [ 2 ] Michael Stonebraker and Joseph M. Hellerstein: “ What Goes Around Comes Around ,” in Readings in Database Systems , 4th edition, MIT Press, pages 2–41, 2005. 
ISBN: 978-0-262-69314-1 [ 3 ] Pramod J. Sadalage and Martin Fowler: NoSQL Distilled . Addison-Wesley, August 2012. ISBN: 978-0-321-82662-6 [ 4 ] Eric Evans: “ NoSQL: What’s in a Name? ,” blog.sym-link.com , October 30, 2009. [ 5 ] James Phillips: “ Surprises in Our NoSQL Adoption Survey ,” blog.couchbase.com , February 8, 2012. [ 6 ] Michael Wagner: SQL/XML:2006 – Evaluierung der Standardkonformität ausgewählter Datenbanksysteme . Diplomica Verlag, Hamburg, 2010. ISBN: 978-3-836-64609-3 [ 7 ] “ XML Data in SQL Server ,” SQL Server 2012 documentation, technet.microsoft.com , 2013. [ 8 ] “ PostgreSQL 9.3.1 Documentation ,” The PostgreSQL Global Development Group, 2013. [ 9 ] “ The MongoDB 2.4 Manual ,” MongoDB, Inc., 2013. [ 10 ] “ RethinkDB 1.11 Documentation ,” rethinkdb.com , 2013. [ 11 ] “ Apache CouchDB 1.6 Documentation ,” docs.couchdb.org , 2014. [ 12 ] Lin Qiao, Kapil Surlaker, Shirshanka Das, et al.: “ On Brewing Fresh Espresso: LinkedIn’s Distributed Data Serving Platform ,” at ACM International Conference on Management of Data (SIGMOD), June 2013. [ 13 ] Rick Long, Mark Harrington, Robert Hain, and Geoff Nicholls: IMS Primer . IBM Redbook SG24-5352-00, IBM International Technical Support Organization, January 2000. [ 14 ] Stephen D. Bartlett: “ IBM’s IMS — Myths, Realities, and Opportunities ,” The Clipper Group Navigator, TCG2013015LI, July 2013. [ 15 ] Sarah Mei: “ Why You Should Never Use MongoDB ,” sarahmei.com , November 11, 2013. [ 16 ] J. S. Knowles and D. M. R. Bell: “The CODASYL Model,” in Databases — Role and Structure: An Advanced Course , edited by P. M. Stocker, P. M. D. Gray, and M. P. Atkinson, pages 19–56, Cambridge University Press, 1984. ISBN: 978-0-521-25430-4 [ 17 ] Charles W. Bachman: “ The Programmer as Navigator ,” Communications of the ACM , volume 16, number 11, pages 653–658, November 1973. doi:10.1145/355611.362534 [ 18 ] Joseph M. Hellerstein, Michael Stonebraker, and James Hamilton: “ Architecture of a Database System ,” Foundations and Trends in Databases , volume 1, number 2, pages 141–259, November 2007. doi:10.1561/1900000002 [ 19 ] Sandeep Parikh and Kelly Stirman: “ Schema Design for Time Series Data in MongoDB ,” blog.mongodb.org , October 30, 2013. [ 20 ] Martin Fowler: “ Schemaless Data Structures ,” martinfowler.com , January 7, 2013. [ 21 ] Amr Awadallah: “ Schema-on-Read vs. Schema-on-Write ,” at Berkeley EECS RAD Lab Retreat , Santa Cruz, CA, May 2009. [ 22 ] Martin Odersky: “ The Trouble with Types ,” at Strange Loop , September 2013. [ 23 ] Conrad Irwin: “ MongoDB — Confessions of a PostgreSQL Lover ,” at HTML5DevConf , October 2013. [ 24 ] “ Percona Toolkit Documentation: pt-online-schema-change ,” Percona Ireland Ltd., 2013. [ 25 ] Rany Keddo, Tobias Bielohlawek, and Tobias Schmidt: “ Large Hadron Migrator ,” SoundCloud, 2013. [ 26 ] Shlomi Noach: “ gh-ost: GitHub’s Online Schema Migration Tool for MySQL ,” githubengineering.com , August 1, 2016. [ 27 ] James C. Corbett, Jeffrey Dean, Michael Epstein, et al.: “ Spanner: Google’s Globally-Distributed Database ,” at 10th USENIX Symposium on Operating System Design and Implementation (OSDI), October 2012. [ 28 ] Donald K. Burleson: “ Reduce I/O with Oracle Cluster Tables ,” dba-oracle.com . [ 29 ] Fay Chang, Jeffrey Dean, Sanjay Ghemawat, et al.: “ Bigtable: A Distributed Storage System for Structured Data ,” at 7th USENIX Symposium on Operating System Design and Implementation (OSDI), November 2006. [ 30 ] Bobbie J. Cochrane and Kathy A. 
McKnight: “ DB2 JSON Capabilities, Part 1: Introduction to DB2 JSON ,” IBM developerWorks, June 20, 2013. [ 31 ] Herb Sutter: “ The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software ,” Dr. Dobb’s Journal , volume 30, number 3, pages 202-210, March 2005. [ 32 ] Joseph M. Hellerstein: “ The Declarative Imperative: Experiences and Conjectures in Distributed Logic ,” Electrical Engineering and Computer Sciences, University of California at Berkeley, Tech report UCB/EECS-2010-90, June 2010. [ 33 ] Jeffrey Dean and Sanjay Ghemawat: “ MapReduce: Simplified Data Processing on Large Clusters ,” at 6th USENIX Symposium on Operating System Design and Implementation (OSDI), December 2004. [ 34 ] Craig Kerstiens: “ JavaScript in Your Postgres ,” blog.heroku.com , June 5, 2013. [ 35 ] Nathan Bronson, Zach Amsden, George Cabrera, et al.: “ TAO: Facebook’s Distributed Data Store for the Social Graph ,” at USENIX Annual Technical Conference (USENIX ATC), June 2013. [ 36 ] “ Apache TinkerPop3.2.3 Documentation ,” tinkerpop.apache.org , October 2016. [ 37 ] “ The Neo4j Manual v2.0.0 ,” Neo Technology, 2013. [ 38 ] Emil Eifrem: Twitter correspondence , January 3, 2014. [ 39 ] David Beckett and Tim Berners-Lee: “ Turtle – Terse RDF Triple Language ,” W3C Team Submission, March 28, 2011. [ 40 ] “ Datomic Development Resources ,” Metadata Partners, LLC, 2013. [ 41 ] W3C RDF Working Group: “ Resource Description Framework (RDF) ,” w3.org , 10 February 2004. [ 42 ] “ Apache Jena ,” Apache Software Foundation. [ 43 ] Steve Harris, Andy Seaborne, and Eric Prud’hommeaux: “ SPARQL 1.1 Query Language ,” W3C Recommendation, March 2013. [ 44 ] Todd J. Green, Shan Shan Huang, Boon Thau Loo, and Wenchao Zhou: “ Datalog and Recursive Query Processing ,” Foundations and Trends in Databases , volume 5, number 2, pages 105–195, November 2013. doi:10.1561/1900000017 [ 45 ] Stefano Ceri, Georg Gottlob, and Letizia Tanca: “ What You Always Wanted to Know About Datalog (And Never Dared to Ask) ,” IEEE Transactions on Knowledge and Data Engineering , volume 1, number 1, pages 146–166, March 1989. doi:10.1109/69.43410 [ 46 ] Serge Abiteboul, Richard Hull, and Victor Vianu: Foundations of Databases . Addison-Wesley, 1995. ISBN: 978-0-201-53771-0, available online at webdam.inria.fr/Alice [ 47 ] Nathan Marz: “ Cascalog ,” cascalog.org . [ 48 ] Dennis A. Benson, Ilene Karsch-Mizrachi, David J. Lipman, et al.: “ GenBank ,” Nucleic Acids Research , volume 36, Database issue, pages D25–D30, December 2007. doi:10.1093/nar/gkm929 [ 49 ] Fons Rademakers: “ ROOT for Big Data Analysis ,” at Workshop on the Future of Big Data Management , London, UK, June 2013.