https://git-scm.com/book/ru/v2/Git-%d0%bd%d0%b0-%d1%81%d0%b5%d1%80%d0%b2%d0%b5%d1%80%d0%b5-Git-%d0%b4%d0%b5%d0%bc%d0%be%d0%bd
Git - Git Daemon

4.5 Git on the Server - Git Daemon

Git Daemon

Next we'll set up a daemon serving repositories over the "Git" protocol. This is a common choice for fast, unauthenticated access to Git data. Remember that since this is not an authenticated service, anything you serve over this protocol is public within its network.
If you're running the daemon on a server that is outside your firewall, it should be used only for projects that are publicly visible to the world. If the server is inside your firewall, you might use it for projects that a large number of people or computers (continuous integration or build servers) need read-only access to, when you don't want to have to add an SSH key for each of them. In any case, the Git protocol is relatively easy to set up. Basically, you need to run this command in a daemonized manner:

$ git daemon --reuseaddr --base-path=/srv/git/ /srv/git/

The --reuseaddr option allows the server to restart without waiting for old connections to time out, --base-path allows people to clone projects without specifying the entire path, and the path at the end tells the Git daemon where to look for repositories to export. If you're running a firewall, you'll also need to open port 9418 on the box you're setting this up on.

You can daemonize this process a number of ways, depending on the operating system. Since systemd is the most common init system among modern Linux distributions, you can use it for this purpose. Simply place a file at /etc/systemd/system/git-daemon.service with these contents:

[Unit]
Description=Start Git Daemon

[Service]
ExecStart=/usr/bin/git daemon --reuseaddr --base-path=/srv/git/ /srv/git/

Restart=always
RestartSec=500ms

StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=git-daemon

User=git
Group=git

[Install]
WantedBy=multi-user.target

As you may have noticed, the Git daemon is started here with git as both user and group. Modify these values to fit your needs and make sure the specified user exists on the system. Also check that the Git binary is indeed located at /usr/bin/git, and change the path if necessary.
Finally, run systemctl enable git-daemon to automatically start the service on boot, and use systemctl start git-daemon and systemctl stop git-daemon to start and stop the service manually.

On other systems, you may want to use xinetd, a script in your sysvinit system, or something else, as long as you get that command daemonized and watched somehow.

Next, you have to tell your Git server which repositories to allow unauthenticated access to. You can do this in each repository by creating a file named git-daemon-export-ok:

$ cd /path/to/project.git
$ touch git-daemon-export-ok

The presence of that file tells Git that it's OK to serve this project without authentication.
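Touching git-daemon-export-ok by hand in every repository gets tedious once a server hosts more than a few projects. The helper below is a minimal sketch, not part of Git itself; it assumes the same /srv/git base path used above (the default is only an illustration) and that repositories are bare directories ending in .git:

```shell
#!/bin/sh
# Mark every bare repository under a base path as exportable by
# git-daemon. The daemon refuses to serve a repository until a
# git-daemon-export-ok file exists inside it.
export_all_repos() {
  base="${1:-/srv/git}"
  for repo in "$base"/*.git; do
    [ -d "$repo" ] || continue          # skip if the glob matched nothing
    touch "$repo/git-daemon-export-ok"  # presence alone enables export
  done
}
```

Keep in mind that once a repository is exported this way, anyone who can reach port 9418 can clone it (git clone git://your-server/project.git), so only run such a helper where everything under the base path really is meant to be public.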
https://sre.google/careers/
Google SRE - Find your next SRE job role and SRE positions

Hear from some of our most senior engineers about their role at Google.

Anna Berenberg, Google Fellow

On my career path: "There's no boring days here. I've grown up so much, and I'm eternally grateful for people who have helped me along the way to learn, who became sponsors and mentors. Nothing could have been done alone: execs, friends."

What keeps you at Google? "The technology — I'm an infrastructure engineer, and there are only a few places that have this scale and this complexity. Google has the biggest infrastructure in the world, and I can challenge myself here. The second reason is the people: I have the privilege of being surrounded by good people, and I treasure that!"
Interviews have been condensed and edited for clarity.

Jeffrey Snover, Distinguished Engineer

Being a Distinguished Engineer means you need to walk this tightrope between what everyone knows is possible and what you know could be done. When you get more senior, it's about judgment that produces business results. If I had a friend who was thinking about joining, I would tell them: "This is a technology playground with awesome people."

Thais Melo, Principal Engineer

I initially came to Google as a Software Engineer on a development team. After a few years, I did a rotation in the SRE organization, and I was so impressed by the culture and approach to engineering challenges that I made the transfer permanent. One of the benefits of SRE teams approaching operating systems as a software engineering problem is that SRE teams are relatively small compared to the partner development teams. This is great for career growth, as SREs end up working with more senior partners in the development organization. Learning incident response from some of the best folks in the industry, and getting to apply those skills in high-impact situations to keep our users safe, has also been a rewarding experience.

I am currently the Tech Lead for Cloud Compute SRE. My time is split between nurturing a strong and diverse engineering culture, being part of the team that responds to serious outages of Google services, and building reliable, elastic, efficient, global-scale Compute infrastructure. My main engineering focus is improving Cloud Compute scalability and efficiency: to enable our users to grow their business and never worry about the Compute infrastructure behind it.
Getting that done using resources efficiently is a really challenging combination of system engineering problems in various parts of the technology stack, and means working across multiple organizations and job roles.

Interviews have been condensed and edited for clarity.

Contact the Google Recruiting Team to learn more about becoming a Principal Engineer, Distinguished Engineer, or Fellow at Google.

Interested in joining SRE? Google strives to cultivate an inclusive workplace. We believe diversity of perspectives and ideas leads to better discussions, decisions, and outcomes for everyone. Explore SRE opportunities at Google.
http://www.trello.com/integrations
Power-Up Your Productivity with Trello Integrations | Trello

Explore the features that help your team succeed: Inbox captures every vital detail from emails, Slack, and more directly into your Trello Inbox. Planner syncs your calendar and allocates focused time slots to boost productivity. Automation automates tasks and workflows with Trello. Power-Ups power up your teams by linking their favorite tools with Trello plugins. Templates give your team a blueprint for success with easy-to-use templates from industry leaders and the Trello community. Integrations let you find the apps your team is already using or discover new ways to get work done in Trello.

Meet Trello. Trello makes it easy for your team to get work done. No matter the project, workflow, or type of team, Trello can help keep things organized. It’s simple – sign up, create a board, and you’re off! Productivity awaits.

Take a page out of these pre-built Trello playbooks designed for all teams. Marketing teams: whether launching a new product, campaign, or creating content, Trello helps marketing teams succeed. Product management: use Trello’s management boards and roadmap features to simplify complex projects and processes. Engineering teams: ship more code, faster, and give your developers the freedom to be more agile with Trello. Design teams: empower your design teams by using Trello to streamline creative requests and promote more fluid cross-team collaboration. Startups: from hitting revenue goals to managing workflows, small businesses thrive with Trello. Remote teams: keep your remote team connected and motivated, no matter where they’re located around the world.

Our product in action. Use case: Task management. Track progress of tasks in one convenient place with a visual layout that adds ‘ta-da’ to your to-do’s. Use case: Resource hub. Save hours when you give teams a well-designed hub to find information easily and quickly.
Use case: Project management. Keep projects organized, deadlines on track, and teammates aligned with Trello.

Compare plans and pricing. Standard: for teams that need to manage more work and scale collaboration. Premium: best for teams up to 100 that need to track multiple projects and visualize work in a variety of ways. Enterprise: everything your enterprise teams and admins need to manage projects. Free plan: for individuals or small teams looking to keep work organized. Whether you’re a team of 2 or 2,000, Trello’s flexible pricing model means you only pay for what you need.

Learn and connect. Trello guide: our easy-to-follow workflow guide will take you from project set-up to Trello expert in no time. Remote work guide: the complete guide to setting up your team for remote work success. Webinars: enjoy our free Trello webinars and become a productivity professional. Customer stories: see how businesses have adopted Trello as a vital part of their workflow. Developers: the sky's the limit in what you can deliver to Trello users in your Power-Up! Help resources: articles and FAQs to get you unstuck.

Helping teams work better, together. Discover Trello use cases, productivity tips, best practices for team collaboration, and expert remote work advice on the Trello blog.

Connect Trello to everything. Find the apps your team is already using or discover new ways to get work done in Trello. Filter by: All integrations, Analytics & reporting, Automation, Board utilities, Communication & collaboration, Developer tools, File management, HR & operations, IT & project management, Marketing & social media, Product & design, Sales & support. Power-Ups allow you to vote, track, attach files, share designs, and much more, right in your Trello boards. Build integrations that connect your apps to Trello and millions of users.

Featured integrations. Slack Power-Up: share cards and activity, pin a Slack channel to a board. JIRA Cloud: easily connect Trello cards to JIRA issues so you can see real-time progress at a glance. Miro: easily attach and create new Miro Boards without having to leave Trello.
https://sre.google/sre-book/automation-at-google#id-bb2urI2FkCQ
Google SRE - Google Automation For Reliability

Chapter 7 - The Evolution of Automation at Google

Written by Niall Murphy with John Looney and Michael Kacirek
Edited by Betsy Beyer

Besides black art, there is only automation and mechanization.
Federico García Lorca (1898–1936), Spanish poet and playwright

For SRE, automation is a force multiplier, not a panacea.
Of course, just multiplying force does not naturally change the accuracy of where that force is applied: doing automation thoughtlessly can create as many problems as it solves. Therefore, while we believe that software-based automation is superior to manual operation in most circumstances, better than either option is a higher-level system design requiring neither of them—an autonomous system. Or to put it another way, the value of automation comes from both what it does and its judicious application. We’ll discuss both the value of automation and how our attitude has evolved over time.

The Value of Automation

What exactly is the value of automation?

Consistency

Although scale is an obvious motivation for automation, there are many other reasons to use it. Take the example of university computing systems, where many systems engineering folks started their careers. Systems administrators of that background were generally charged with running a collection of machines or some software, and were accustomed to manually performing various actions in the discharge of that duty. One common example is creating user accounts; others include purely operational duties like making sure backups happen, managing server failover, and small data manipulations like changing the upstream DNS servers’ resolv.conf, DNS server zone data, and similar activities. Ultimately, however, this prevalence of manual tasks is unsatisfactory for both the organizations and indeed the people maintaining systems in this way. For a start, any action performed by a human or humans hundreds of times won’t be performed the same way each time: even with the best will in the world, very few of us will ever be as consistent as a machine. This inevitable lack of consistency leads to mistakes, oversights, issues with data quality, and, yes, reliability problems. In this domain—the execution of well-scoped, known procedures—the value of consistency is in many ways the primary value of automation.
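The user-account example is the easiest to make consistent in script form. The sketch below is illustrative only (ensure_user is a hypothetical helper, and the DRY_RUN switch is an assumption rather than part of any standard tool); the point is that the scripted version performs the same check and the same action every time, which a human typing useradd hundreds of times will not:

```shell
#!/bin/sh
# Idempotent account creation: running it twice leaves the system in
# the same state, unlike ad hoc manual administration.
ensure_user() {
  user="$1"
  if id "$user" >/dev/null 2>&1; then
    echo "exists: $user"            # already present; nothing to do
  elif [ -n "$DRY_RUN" ]; then
    echo "would create: $user"      # preview mode for safe testing
  else
    useradd --create-home "$user"   # real change; requires root
  fi
}
```

Because the check comes before the action, the script is safe to rerun, which is exactly the property manual procedures tend to lose.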
A Platform

Automation doesn’t just provide consistency. Designed and done properly, automatic systems also provide a platform that can be extended, applied to more systems, or perhaps even spun out for profit. (The alternative, no automation, is neither cost effective nor extensible: it is instead a tax levied on the operation of a system.) A platform also centralizes mistakes. In other words, a bug fixed in the code will be fixed there once and forever, unlike a sufficiently large set of humans performing the same procedure, as discussed previously. A platform can be extended to perform additional tasks more easily than humans can be instructed to perform them (or sometimes even realize that they have to be done). Depending on the nature of the task, it can run either continuously or much more frequently than humans could appropriately accomplish the task, or at times that are inconvenient for humans. Furthermore, a platform can export metrics about its performance, or otherwise allow you to discover details about your process you didn’t know previously, because these details are more easily measurable within the context of a platform.

Faster Repairs

There’s an additional benefit for systems where automation is used to resolve common faults in a system (a frequent situation for SRE-created automation). If automation runs regularly and successfully enough, the result is a reduced mean time to repair (MTTR) for those common faults. You can then spend your time on other tasks instead, thereby achieving increased developer velocity because you don’t have to spend time either preventing a problem or (more commonly) cleaning up after it. As is well understood in the industry, the later in the product lifecycle a problem is discovered, the more expensive it is to fix; see Testing for Reliability.
Generally, problems that occur in actual production are most expensive to fix, both in terms of time and money, which means that an automated system looking for problems as soon as they arise has a good chance of lowering the total cost of the system, given that the system is sufficiently large.

Faster Action

In the infrastructural situations where SRE automation tends to be deployed, humans don’t usually react as fast as machines. In most common cases, where, for example, failover or traffic switching can be well defined for a particular application, it makes no sense to effectively require a human to intermittently press a button called “Allow system to continue to run.” (Yes, it is true that sometimes automatic procedures can end up making a bad situation worse, but that is why such procedures should be scoped over well-defined domains.) Google has a large amount of automation; in many cases, the services we support could not long survive without this automation because they crossed the threshold of manageable manual operation long ago.

Time Saving

Finally, time saving is an oft-quoted rationale for automation. Although people cite this rationale for automation more than the others, in many ways the benefit is often less immediately calculable. Engineers often waver over whether a particular piece of automation or code is worth writing, in terms of effort saved in not requiring a task to be performed manually versus the effort required to write it. It’s easy to overlook the fact that once you have encapsulated some task in automation, anyone can execute the task. Therefore, the time savings apply across anyone who would plausibly use the automation. Decoupling operator from operation is very powerful. Joseph Bironas, an SRE who led Google’s datacenter turnup efforts for a time, forcefully argued: "If we are engineering processes and solutions that are not automatable, we continue having to staff humans to maintain the system.
If we have to staff humans to do the work, we are feeding the machines with the blood, sweat, and tears of human beings. Think The Matrix with less special effects and more pissed off System Administrators."

The Value for Google SRE

All of these benefits and trade-offs apply to us just as much as anyone else, and Google does have a strong bias toward automation. Part of our preference for automation springs from our particular business challenges: the products and services we look after are planet-spanning in scale, and we don’t typically have time to engage in the same kind of machine or service hand-holding common in other organizations. For truly large services, the factors of consistency, quickness, and reliability dominate most conversations about the trade-offs of performing automation.

Another argument in favor of automation, particularly in the case of Google, is our complicated yet surprisingly uniform production environment, described in The Production Environment at Google, from the Viewpoint of an SRE. While other organizations might have an important piece of equipment without a readily accessible API, software for which no source code is available, or another impediment to complete control over production operations, Google generally avoids such scenarios. We have built APIs for systems when no API was available from the vendor. Even though purchasing software for a particular task would have been much cheaper in the short term, we chose to write our own solutions, because doing so produced APIs with the potential for much greater long-term benefits. We spent a lot of time overcoming obstacles to automatic system management, and then resolutely developed that automatic system management itself. Given how Google manages its source code [Pot16], the availability of that code for more or less any system that SRE touches also means that our mission to “own the product in production” is much easier because we control the entirety of the stack.
Of course, although Google is ideologically bent upon using machines to manage machines where possible, reality requires some modification of our approach. It isn’t appropriate to automate every component of every system, and not everyone has the ability or inclination to develop automation at a particular time. Some essential systems started out as quick prototypes, not designed to last or to interface with automation. The previous paragraphs state a maximalist view of our position, but one that we have been broadly successful at putting into action within the Google context. In general, we have chosen to create platforms where we could, or to position ourselves so that we could create platforms over time. We view this platform-based approach as necessary for manageability and scalability.

The Use Cases for Automation

In the industry, automation is the term generally used for writing code to solve a wide variety of problems, although the motivations for writing this code, and the solutions themselves, are often quite different. More broadly, in this view, automation is “meta-software”—software to act on software. As we implied earlier, there are a number of use cases for automation. Here is a non-exhaustive list of examples:

- User account creation
- Cluster turnup and turndown for services
- Software or hardware installation preparation and decommissioning
- Rollouts of new software versions
- Runtime configuration changes
- A special case of runtime config changes: changes to your dependencies

This list could continue essentially ad infinitum.

Google SRE’s Use Cases for Automation

In Google, we have all of the use cases just listed, and more. However, within Google SRE, our primary affinity has typically been for running infrastructure, as opposed to managing the quality of the data that passes over that infrastructure.
This line isn’t totally clear—for example, we care deeply if half of a dataset vanishes after a push, and therefore we alert on coarse-grain differences like this, but it’s rare for us to write the equivalent of changing the properties of some arbitrary subset of accounts on a system. Therefore, the context for our automation is often automation to manage the lifecycle of systems, not their data: for example, deployments of a service in a new cluster. To this extent, SRE’s automation efforts are not far off what many other people and organizations do, except that we use different tools to manage it and have a different focus (as we’ll discuss).

Widely available tools like Puppet, Chef, cfengine, and even Perl, which all provide ways to automate particular tasks, differ mostly in terms of the level of abstraction of the components provided to help the act of automating. A full language like Perl provides POSIX-level affordances, which in theory provide an essentially unlimited scope of automation across the APIs accessible to the system, whereas Chef and Puppet provide out-of-the-box abstractions with which services or other higher-level entities can be manipulated. The trade-off here is classic: higher-level abstractions are easier to manage and reason about, but when you encounter a “leaky abstraction,” you fail systemically, repeatedly, and potentially inconsistently. For example, we often assume that pushing a new binary to a cluster is atomic; the cluster will either end up with the old version, or the new version. However, real-world behavior is more complicated: that cluster’s network can fail halfway through; machines can fail; communication to the cluster management layer can fail, leaving the system in an inconsistent state; depending on the situation, new binaries could be staged but not pushed, or pushed but not restarted, or restarted but not verifiable.
Very few abstractions model these kinds of outcomes successfully, and most generally end up halting themselves and calling for intervention. Truly bad automation systems don’t even do that.

SRE has a number of philosophies and products in the domain of automation, some of which look more like generic rollout tools without particularly detailed modeling of higher-level entities, and some of which look more like languages for describing service deployment (and so on) at a very abstract level. Work done in the latter tends to be more reusable and be more of a common platform than the former, but the complexity of our production environment sometimes means that the former approach is the most immediately tractable option.

A Hierarchy of Automation Classes

Although all of these automation steps are valuable, and indeed an automation platform is valuable in and of itself, in an ideal world, we wouldn’t need externalized automation. In fact, instead of having a system that has to have external glue logic, it would be even better to have a system that needs no glue logic at all, not just because internalization is more efficient (although such efficiency is useful), but because it has been designed to not need glue logic in the first place. Accomplishing that involves taking the use cases for glue logic—generally “first order” manipulations of a system, such as adding accounts or performing system turnup—and finding a way to handle those use cases directly within the application.

As a more detailed example, most turnup automation at Google is problematic because it ends up being maintained separately from the core system and therefore suffers from “bit rot,” i.e., not changing when the underlying systems change. Despite the best of intentions, attempting to more tightly couple the two (turnup automation and the core system) often fails due to unaligned priorities, as product developers will, not unreasonably, resist a test deployment requirement for every change.
Secondly, automation that is crucial but only executed at infrequent intervals and therefore difficult to test is often particularly fragile because of the extended feedback cycle. Cluster failover is one classic example of infrequently executed automation: failovers might only occur every few months, or infrequently enough that inconsistencies between instances are introduced.

The evolution of automation follows a path:

1) No automation
   Database master is failed over manually between locations.
2) Externally maintained system-specific automation
   An SRE has a failover script in his or her home directory.
3) Externally maintained generic automation
   The SRE adds database support to a "generic failover" script that everyone uses.
4) Internally maintained system-specific automation
   The database ships with its own failover script.
5) Systems that don’t need any automation
   The database notices problems, and automatically fails over without human intervention.

SRE hates manual operations, so we obviously try to create systems that don’t require them. However, sometimes manual operations are unavoidable.

There is additionally a subvariety of automation that applies changes not across the domain of specific system-related configuration, but across the domain of production as a whole. In a highly centralized proprietary production environment like Google’s, there are a large number of changes that have a non–service-specific scope—e.g., changing upstream Chubby servers, a flag change to the Bigtable client library to make access more reliable, and so on—which nonetheless need to be safely managed and rolled back if necessary. Beyond a certain volume of changes, it is infeasible for production-wide changes to be accomplished manually, and at some time before that point, it’s a waste to have manual oversight for a process where a large proportion of the changes are either trivial or accomplished successfully by basic relaunch-and-check strategies.
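Stage 5 of the evolution path above ("the database notices problems, and automatically fails over without human intervention") can be sketched as a toy. Everything here, the `Replica` and `Cluster` classes and `failover_if_needed`, is invented for illustration; it is not any real failover system:

```python
from dataclasses import dataclass, field

@dataclass
class Replica:
    name: str
    replication_lag_seconds: float
    healthy: bool = True

@dataclass
class Cluster:
    master: Replica
    replicas: list
    log: list = field(default_factory=list)

    def promote(self, replica):
        # Idempotent: promoting the current master is a no-op.
        if replica is self.master:
            return
        self.replicas = [r for r in self.replicas if r is not replica] + [self.master]
        self.master = replica
        self.log.append(("promoted", replica.name))

def failover_if_needed(cluster):
    """Stage 5: notice a dead master, promote the least-lagged healthy
    replica; no human in the loop. Returns True if a failover happened."""
    if cluster.master.healthy:
        return False
    best = min((r for r in cluster.replicas if r.healthy),
               key=lambda r: r.replication_lag_seconds)
    cluster.promote(best)
    return True
```

In a real system this check would run as a supervising loop (or be built into the database itself); the point of the sketch is only that detection and remediation live inside the system, with no externally maintained script to rot.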
Let’s use internal case studies to illustrate some of the preceding points in detail. The first case study is about how, due to some diligent, far-sighted work, we managed to achieve the self-professed nirvana of SRE: to automate ourselves out of a job.

Automate Yourself Out of a Job: Automate ALL the Things!

For a long while, the Ads products at Google stored their data in a MySQL database. Because Ads data obviously has high reliability requirements, an SRE team was charged with looking after that infrastructure. From 2005 to 2008, the Ads Database mostly ran in what we considered to be a mature and managed state. For example, we had automated away the worst, but not all, of the routine work for standard replica replacements. We believed the Ads Database was well managed and that we had harvested most of the low-hanging fruit in terms of optimization and scale.

However, as daily operations became comfortable, team members began to look at the next level of system development: migrating MySQL onto Google’s cluster scheduling system, Borg. We hoped this migration would provide two main benefits:

- Completely eliminate machine/replica maintenance: Borg would automatically handle the setup/restart of new and broken tasks.
- Enable bin-packing of multiple MySQL instances on the same physical machine: Borg would enable more efficient use of machine resources via Containers.

In late 2008, we successfully deployed a proof of concept MySQL instance on Borg. Unfortunately, this was accompanied by a significant new difficulty. A core operating characteristic of Borg is that its tasks move around automatically. Tasks commonly move within Borg as frequently as once or twice per week. This frequency was tolerable for our database replicas, but unacceptable for our masters. At that time, the process for master failover took 30–90 minutes per instance.
Simply because we ran on shared machines and were subject to reboots for kernel upgrades, in addition to the normal rate of machine failure, we had to expect a number of otherwise unrelated failovers every week. This factor, in combination with the number of shards on which our system was hosted, meant that:

- Manual failovers would consume a substantial amount of human hours and would give us best-case availability of 99% uptime, which fell short of the actual business requirements of the product.
- In order to meet our error budgets, each failover would have to take less than 30 seconds of downtime.
- There was no way to optimize a human-dependent procedure to make downtime shorter than 30 seconds.

Therefore, our only choice was to automate failover. Actually, we needed to automate more than just failover.

In 2009 Ads SRE completed our automated failover daemon, which we dubbed “Decider.” Decider could complete MySQL failovers for both planned and unplanned failovers in less than 30 seconds 95% of the time. With the creation of Decider, MySQL on Borg (MoB) finally became a reality. We graduated from optimizing our infrastructure for a lack of failover to embracing the idea that failure is inevitable, and therefore optimizing to recover quickly through automation.

While automation let us achieve highly available MySQL in a world that forced up to two restarts per week, it did come with its own set of costs. All of our applications had to be changed to include significantly more failure-handling logic than before. Given that the norm in the MySQL development world is to assume that the MySQL instance will be the most stable component in the stack, this switch meant customizing software like JDBC to be more tolerant of our failure-prone environment. However, the benefits of migrating to MoB with Decider were well worth these costs. Once on MoB, the time our team spent on mundane operational tasks dropped by 95%.
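The failover numbers in this section can be checked with back-of-envelope arithmetic. The up-to-two-failovers-per-week rate and the 30–90 minute manual failover time are from the text; the 99.99% availability target is an assumed round number for illustration, not a figure stated here:

```python
# Back-of-envelope availability arithmetic for the failover numbers
# above. The 99.99% target is an illustrative assumption.
SECONDS_PER_WEEK = 7 * 24 * 3600  # 604,800

def max_seconds_per_failover(availability_target, failovers_per_week):
    """Downtime budget per failover, if failovers are the only downtime."""
    weekly_budget_s = (1 - availability_target) * SECONDS_PER_WEEK
    return weekly_budget_s / failovers_per_week

# Manual failover at 30-90 minutes, twice a week; take the 45-minute
# mid-range: ~5,400 s of downtime per week, i.e. roughly 99.1% uptime,
# consistent with the "best-case availability of 99%" figure above.
manual_downtime_s = 2 * 45 * 60
manual_availability = 1 - manual_downtime_s / SECONDS_PER_WEEK

# An assumed 99.99% target with two failovers per week leaves ~30 s
# per failover, matching the "less than 30 seconds" requirement.
budget_s = max_seconds_per_failover(0.9999, 2)
```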
Our failovers were automated, so an outage of a single database task no longer paged a human.

The main upshot of this new automation was that we had a lot more free time to spend on improving other parts of the infrastructure. Such improvements had a cascading effect: the more time we saved, the more time we were able to spend on optimizing and automating other tedious work. Eventually, we were able to automate schema changes, causing the cost of total operational maintenance of the Ads Database to drop by nearly 95%. Some might say that we had successfully automated ourselves out of this job. The hardware side of our domain also saw improvement. Migrating to MoB freed up considerable resources because we could schedule multiple MySQL instances on the same machines, which improved utilization of our hardware. In total, we were able to free up about 60% of our hardware. Our team was now flush with hardware and engineering resources.

This example demonstrates the wisdom of going the extra mile to deliver a platform rather than replacing existing manual procedures. The next example comes from the cluster infrastructure group, and illustrates some of the more difficult trade-offs you might encounter on your way to automating all the things.

Soothing the Pain: Applying Automation to Cluster Turnups

Ten years ago, the Cluster Infrastructure SRE team seemed to get a new hire every few months. As it turned out, that was approximately the same frequency at which we turned up a new cluster. Because turning up a service in a new cluster gives new hires exposure to a service’s internals, this task seemed like a natural and useful training tool.

The steps taken to get a cluster ready for use were something like the following:

1. Fit out a datacenter building for power and cooling.
2. Install and configure core switches and connections to the backbone.
3. Install a few initial racks of servers.
4. Configure basic services such as DNS and installers, then configure a lock service, storage, and computing.
5. Deploy the remaining racks of machines.
6. Assign user-facing services resources, so their teams can set up the services.

Steps 4 and 6 were extremely complex. While basic services like DNS are relatively simple, the storage and compute subsystems at that time were still in heavy development, so new flags, components, and optimizations were added weekly. Some services had more than a hundred different component subsystems, each with a complex web of dependencies. Failing to configure one subsystem, or configuring a system or component differently than other deployments, is a customer-impacting outage waiting to happen.

In one case, a multi-petabyte Bigtable cluster was configured to not use the first (logging) disk on 12-disk systems, for latency reasons. A year later, some automation assumed that if a machine’s first disk wasn’t being used, that machine didn’t have any storage configured; therefore, it was safe to wipe the machine and set it up from scratch. All of the Bigtable data was wiped, instantly. Thankfully we had multiple real-time replicas of the dataset, but such surprises are unwelcome. Automation needs to be careful about relying on implicit "safety" signals.

Early automation focused on accelerating cluster delivery. This approach tended to rely upon creative use of SSH for tedious package distribution and service initialization problems. This strategy was an initial win, but those free-form scripts became a cholesterol of technical debt.

Detecting Inconsistencies with Prodtest

As the number of clusters grew, some clusters required hand-tuned flags and settings. As a result, teams wasted more and more time chasing down difficult-to-spot misconfigurations. If a flag that made GFS more responsive to log processing leaked into the default templates, cells with many files could run out of memory under load.
Infuriating and time-consuming misconfigurations crept in with nearly every large configuration change. The creative—though brittle—shell scripts we used to configure clusters were neither scaling to the number of people who wanted to make changes nor to the sheer number of cluster permutations that needed to be built. These shell scripts also failed to resolve more significant concerns before declaring that a service was good to take customer-facing traffic, such as:

- Were all of the service’s dependencies available and correctly configured?
- Were all configurations and packages consistent with other deployments?
- Could the team confirm that every configuration exception was desired?

Prodtest (Production Test) was an ingenious solution to these unwelcome surprises. We extended the Python unit test framework to allow for unit testing of real-world services. These unit tests have dependencies, allowing a chain of tests, and a failure in one test would quickly abort. Take the test shown in Figure 7-1 as an example.

Figure 7-1. ProdTest for DNS Service, showing how one failed test aborts the subsequent chain of tests

A given team’s Prodtest was given the cluster name, and it could validate that team’s services in that cluster. Later additions allowed us to generate a graph of the unit tests and their states. This functionality allowed an engineer to see quickly if their service was correctly configured in all clusters, and if not, why. The graph highlighted the failed step, and the failing Python unit test output a more verbose error message.

Any time a team encountered a delay due to another team’s unexpected misconfiguration, a bug could be filed to extend their Prodtest. This ensured that a similar problem would be discovered earlier in the future. SREs were proud to be able to assure their customers that all services—both newly turned up services and existing services with new configuration—would reliably serve production traffic.
For the first time, our project managers could predict when a cluster could "go live," and had a complete understanding of why each cluster took six or more weeks to go from "network-ready" to "serving live traffic." Out of the blue, SRE received a mission from senior management: In three months, five new clusters will reach network-ready on the same day. Please turn them up in one week.

Resolving Inconsistencies Idempotently

A "One Week Turnup" was a terrifying mission. We had tens of thousands of lines of shell script owned by dozens of teams. We could quickly tell how unprepared any given cluster was, but fixing it meant that the dozens of teams would have to file hundreds of bugs, and then we had to hope that these bugs would be promptly fixed.

We realized that evolving from "Python unit tests finding misconfigurations" to "Python code fixing misconfigurations" could enable us to fix these issues faster. The unit test already knew which cluster we were examining and the specific test that was failing, so we paired each test with a fix. If each fix was written to be idempotent, and could assume that all dependencies were met, the problem should have been easy—and safe—to resolve. Requiring idempotent fixes meant teams could run their "fix script" every 15 minutes without fearing damage to the cluster’s configuration. If the DNS team’s test was blocked on the Machine Database team’s configuration of a new cluster, as soon as the cluster appeared in the database, the DNS team’s tests and fixes would start working.

Take the test shown in Figure 7-2 as an example. If TestDnsMonitoringConfigExists fails, as shown, we can call FixDnsMonitoringCreateConfig, which scrapes configuration from a database, then checks a skeleton configuration file into our revision control system. Then TestDnsMonitoringConfigExists passes on retry, and the TestDnsMonitoringConfigPushed test can be attempted. If the test fails, the FixDnsMonitoringPushConfig step runs.
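The test-paired-with-fix pattern just described can be sketched in a few lines. The runner and its retry behavior follow the text; the test and fix bodies are invented stand-ins (the real FixDnsMonitoringCreateConfig scraped a database and checked a skeleton config into revision control):

```python
def run_with_fixes(steps, max_attempts=3):
    """steps: ordered (test, fix) pairs. Run each test; on failure, run
    the paired fix and retry. Stop and report if a test never passes."""
    for test, fix in steps:
        for _ in range(max_attempts):
            if test():
                break
            fix()  # must be idempotent: safe to run every 15 minutes
        else:
            return ("stopped", test.__name__)  # notify a human
    return ("ok", None)

config = {}  # stand-in for the revision-controlled configuration

def TestDnsMonitoringConfigExists():
    return "dns-monitoring" in config

def FixDnsMonitoringCreateConfig():
    # Idempotent: setdefault changes nothing if the entry exists.
    config.setdefault("dns-monitoring", {"pushed": False})

def TestDnsMonitoringConfigPushed():
    return config.get("dns-monitoring", {}).get("pushed", False)

def FixDnsMonitoringPushConfig():
    config["dns-monitoring"]["pushed"] = True

dns_steps = [
    (TestDnsMonitoringConfigExists, FixDnsMonitoringCreateConfig),
    (TestDnsMonitoringConfigPushed, FixDnsMonitoringPushConfig),
]
```

Because every fix is idempotent, running `run_with_fixes(dns_steps)` repeatedly converges on a correct configuration without risking damage, which is exactly what made the 15-minute cron-style reruns safe.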
If a fix fails multiple times, the automation assumes that the fix failed and stops, notifying the user. Armed with these scripts, a small group of engineers could ensure that we could go from "The network works, and machines are listed in the database" to "Serving 1% of websearch and ads traffic" in a matter of a week or two. At the time, this seemed to be the apex of automation technology.

Looking back, this approach was deeply flawed; the latency between the test, the fix, and then a second test introduced flaky tests that sometimes worked and sometimes failed. Not all fixes were naturally idempotent, so a flaky test that was followed by a fix might render the system in an inconsistent state.

Figure 7-2. ProdTest for DNS Service, showing that one failed test resulted in only running one fix

The Inclination to Specialize

Automation processes can vary in three respects:

- Competence, i.e., their accuracy
- Latency, how quickly all steps are executed when initiated
- Relevance, or proportion of real-world process covered by automation

We began with a process that was highly competent (maintained and run by the service owners), high-latency (the service owners performed the process in their spare time or assigned it to new engineers), and very relevant (the service owners knew when the real world changed, and could fix the automation).

To reduce turnup latency, many service-owning teams instructed a single "turnup team" what automation to run. The turnup team used tickets to start each stage in the turnup so that we could track the remaining tasks, and who those tasks were assigned to. If the human interactions regarding automation modules occurred between people in the same room, cluster turnups could happen in a much shorter time. Finally, we had our competent, accurate, and timely automation process!

But this state didn’t last long. The real world is chaotic: software, configuration, data, etc.
changed, resulting in over a thousand separate changes a day to affected systems. The people most affected by automation bugs were no longer domain experts, so the automation became less relevant (meaning that new steps were missed) and less competent (new flags might have caused automation to fail). However, it took a while for this drop in quality to impact velocity.

Automation code, like unit test code, dies when the maintaining team isn’t obsessive about keeping the code in sync with the codebase it covers. The world changes around the code: the DNS team adds new configuration options, the storage team changes their package names, and the networking team needs to support new devices. By relieving teams who ran services of the responsibility to maintain and run their automation code, we created ugly organizational incentives:

- A team whose primary task is to speed up the current turnup has no incentive to reduce the technical debt of the service-owning team running the service in production later.
- A team not running automation has no incentive to build systems that are easy to automate.
- A product manager whose schedule is not affected by low-quality automation will always prioritize new features over simplicity and automation.

The most functional tools are usually written by those who use them. A similar argument applies to why product development teams benefit from keeping at least some operational awareness of their systems in production.

Turnups were again high-latency, inaccurate, and incompetent—the worst of all worlds. However, an unrelated security mandate allowed us out of this trap. Much of distributed automation relied at that time on SSH. This is clumsy from a security perspective, because people must have root on many machines to run most commands. A growing awareness of advanced, persistent security threats drove us to reduce the privileges SREs enjoyed to the absolute minimum they needed to do their jobs.
We had to replace our use of sshd with an authenticated, ACL-driven, RPC-based Local Admin Daemon, also known as Admin Servers, which had permissions to perform those local changes. As a result, no one could install or modify a server without an audit trail. Changes to the Local Admin Daemon and the Package Repo were gated on code reviews, making it very difficult for someone to exceed their authority; giving someone the access to install packages would not let them view colocated logs. The Admin Server logged the RPC requestor, any parameters, and the results of all RPCs to enhance debugging and security audits.

Service-Oriented Cluster-Turnup

In the next iteration, Admin Servers became part of service teams’ workflows, both as related to the machine-specific Admin Servers (for installing packages and rebooting) and cluster-level Admin Servers (for actions like draining or turning up a service). SREs moved from writing shell scripts in their home directories to building peer-reviewed RPC servers with fine-grained ACLs.

Later on, after the realization that turnup processes had to be owned by the teams that owned the services fully sank in, we saw this as a way to approach cluster turnup as a Service-Oriented Architecture (SOA) problem: service owners would be responsible for creating an Admin Server to handle cluster turnup/turndown RPCs, sent by the system that knew when clusters were ready. In turn, each team would provide the contract (API) that the turnup automation needed, while still being free to change the underlying implementation. As a cluster reached "network-ready," automation sent an RPC to each Admin Server that played a part in turning up the cluster. We now have a low-latency, competent, and accurate process; most importantly, this process has stayed strong as the rate of change, the number of teams, and the number of services seem to double each year.
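The two properties of the Admin Server that carry the security argument, an ACL check before every action and an audit log of requestor, parameters, and result, can be sketched minimally. The class, method names, and ACL structure here are invented for illustration; the real Admin Servers are RPC daemons, not in-process objects:

```python
import time

class AdminServer:
    """Toy sketch: ACL-gated actions with a mandatory audit trail."""

    def __init__(self, acls):
        self.acls = acls  # e.g. {"install_package": {"release-eng"}}
        self.audit_log = []

    def handle(self, requestor, method, **params):
        if requestor in self.acls.get(method, set()):
            result = getattr(self, "_do_" + method)(**params)
        else:
            result = "PERMISSION_DENIED"
        # Log every request, allowed or not, for debugging and audits.
        self.audit_log.append((time.time(), requestor, method, params, result))
        return result

    def _do_install_package(self, package):
        # A real implementation would fetch from a package repository;
        # here we just report success.
        return "installed " + package
```

Note that the denied request is logged just like the granted one; an ACL without an audit trail would miss exactly the "exceeded their authority" cases the text describes.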
As mentioned earlier, our evolution of turnup automation followed a path:

1. Operator-triggered manual action (no automation)
2. Operator-written, system-specific automation
3. Externally maintained generic automation
4. Internally maintained, system-specific automation
5. Autonomous systems that need no human intervention

While this evolution has, broadly speaking, been a success, the Borg case study illustrates another way we have come to think of the problem of automation.

Borg: Birth of the Warehouse-Scale Computer

Another way to understand the development of our attitude toward automation, and when and where that automation is best deployed, is to consider the history of the development of our cluster management systems. 31 Like MySQL on Borg, which demonstrated the success of converting manual operations to automatic ones, and the cluster turnup process, which demonstrated the downside of not thinking carefully enough about where and how automation was implemented, developing cluster management also ended up demonstrating another lesson about how automation should be done. Like our previous two examples, something quite sophisticated was created as the eventual result of continuous evolution from simpler beginnings.

Google’s clusters were initially deployed much like everyone else’s small networks of the time: racks of machines with specific purposes and heterogeneous configurations. Engineers would log in to some well-known “master” machine to perform administrative tasks; “golden” binaries and configuration lived on these masters. As we had only one colo provider, most naming logic implicitly assumed that location. As production grew, and we began to use multiple clusters, different domains (cluster names) entered the picture. It became necessary to have a file describing what each machine did, which grouped machines under some loose naming strategy.
This descriptor file, in combination with the equivalent of a parallel SSH, allowed us to reboot (for example) all the search machines in one go. Around this time, it was common to get tickets like “search is done with machine x1, crawl can have the machine now.”

Automation development began. Initially automation consisted of simple Python scripts for operations such as the following:

- Service management: keeping services running (e.g., restarts after segfaults)
- Tracking what services were supposed to run on which machines
- Log message parsing: SSHing into each machine and looking for regexps

Automation eventually mutated into a proper database that tracked machine state, and also incorporated more sophisticated monitoring tools. With the union set of the automation available, we could now automatically manage much of the lifecycle of machines: noticing when machines were broken, removing the services, sending them to repair, and restoring the configuration when they came back from repair.

But to take a step back, this automation was useful yet profoundly limited, due to the fact that abstractions of the system were relentlessly tied to physical machines. We needed a new approach, hence Borg [Ver15] was born: a system that moved away from the relatively static host/port/job assignments of the previous world, toward treating a collection of machines as a managed sea of resources. Central to its success—and its conception—was the notion of turning cluster management into an entity for which API calls could be issued, to some central coordinator. This liberated extra dimensions of efficiency, flexibility, and reliability: unlike the previous model of machine “ownership,” Borg could allow machines to schedule, for example, batch and user-facing tasks on the same machine.
This functionality ultimately resulted in continuous and automatic operating system upgrades with a very small amount of constant 32 effort—effort that does not scale with the total size of production deployments. Slight deviations in machine state are now automatically fixed; brokenness and lifecycle management are essentially no-ops for SRE at this point. Thousands of machines are born, die, and go into repairs daily with no SRE effort.

To echo the words of Ben Treynor Sloss: by taking the approach that this was a software problem, the initial automation bought us enough time to turn cluster management into something autonomous, as opposed to automated. We achieved this goal by bringing ideas related to data distribution, APIs, hub-and-spoke architectures, and classic distributed system software development to bear upon the domain of infrastructure management.

An interesting analogy is possible here: we can make a direct mapping between the single machine case and the development of cluster management abstractions. In this view, rescheduling on another machine looks a lot like a process moving from one CPU to another: of course, those compute resources happen to be at the other end of a network link, but to what extent does that actually matter? Thinking in these terms, rescheduling looks like an intrinsic feature of the system rather than something one would “automate”—humans couldn’t react fast enough anyway. Similarly in the case of cluster turnup: in this metaphor, cluster turnup is simply additional schedulable capacity, a bit like adding disk or RAM to a single computer. However, a single-node computer is not, in general, expected to continue operating when a large number of components fail. The global computer is—it must be self-repairing to operate once it grows past a certain size, due to the essentially statistically guaranteed large number of failures taking place every second.
This implies that as we move systems up the hierarchy from manually triggered, to automatically triggered, to autonomous, some capacity for self-introspection is necessary to survive.

Reliability Is the Fundamental Feature

Of course, for effective troubleshooting, the details of internal operation that the introspection relies upon should also be exposed to the humans managing the overall system. Analogous discussions about the impact of automation in the noncomputer domain—for example, in airplane flight 33 or industrial applications—often point out the downside of highly effective automation: 34 human operators are progressively more relieved of useful direct contact with the system as the automation covers more and more daily activities over time. Inevitably, then, a situation arises in which the automation fails, and the humans are now unable to successfully operate the system. The fluidity of their reactions has been lost due to lack of practice, and their mental models of what the system should be doing no longer reflect the reality of what it is doing. 35

This situation arises more when the system is nonautonomous—i.e., where automation replaces manual actions, and the manual actions are presumed to be always performable and available just as they were before. Sadly, over time, this ultimately becomes false: those manual actions are not always performable because the functionality to permit them no longer exists.

We, too, have experienced situations where automation has been actively harmful on a number of occasions—see Automation: Enabling Failure at Scale—but in Google’s experience, there are more systems for which automation or autonomous behavior are no longer optional extras. As you scale, this is of course the case, but there are still strong arguments for more autonomous behavior of systems irrespective of size. Reliability is the fundamental feature, and autonomous, resilient behavior is one useful way to get that.
Recommendations

You might read the examples in this chapter and decide that you need to be Google-scale before you have anything to do with automation whatsoever. This is untrue, for two reasons: automation provides more than just time saving, so it’s worth implementing in more cases than a simple time-expended versus time-saved calculation might suggest. But the approach with the highest leverage actually occurs in the design phase: shipping and iterating rapidly might allow you to implement functionality faster, yet rarely makes for a resilient system. Autonomous operation is difficult to convincingly retrofit to sufficiently large systems, but standard good practices in software engineering will help considerably: having decoupled subsystems, introducing APIs, minimizing side effects, and so on.

Automation: Enabling Failure at Scale

Google runs over a dozen of its own large datacenters, but we also depend on machines in many third-party colocation facilities (or "colos"). Our machines in these colos are used to terminate most incoming connections, or as a cache for our own Content Delivery Network, in order to lower end-user latency. At any point in time, a number of these racks are being installed or decommissioned; both of these processes are largely automated. One step during decommission involves overwriting the full content of the disk of all the machines in the rack, after which point an independent system verifies the successful erase. We call this process "Diskerase."

Once upon a time, the automation in charge of decommissioning a particular rack failed, but only after the Diskerase step had completed successfully. Later, the decommission process was restarted from the beginning, to debug the failure. On that iteration, when trying to send the set of machines in the rack to Diskerase, the automation determined that the set of machines that still needed to be Diskerased was (correctly) empty.
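The hazard at this point in the story is a classic one: an empty collection doubling as a sentinel value. A minimal sketch of that bug class (all names and data are invented; this is not the real decommission code):

```python
# Illustrative only: an "empty means everything" sentinel bug of the
# kind this sidebar describes, plus the safer alternative.
ALL_MACHINES = ["rack1-m1", "rack1-m2", "cdn-colo-a-m1", "cdn-colo-b-m1"]

def machines_to_erase_buggy(requested):
    # BUG: the empty list is overloaded as a special value meaning
    # "every machine we have".
    if not requested:
        return list(ALL_MACHINES)
    return [m for m in requested if m in ALL_MACHINES]

def machines_to_erase_safe(requested):
    # Safer: an empty request erases nothing. "Everything" must be
    # requested explicitly by the caller, where it can be rate-limited
    # and sanity-checked.
    return [m for m in requested if m in ALL_MACHINES]
```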
Unfortunately, the empty set was used as a special value, interpreted to mean "everything." This means the automation sent almost all the machines we have in all colos to Diskerase. Within minutes, the highly efficient Diskerase wiped the disks on all machines in our CDN, and the machines were no longer able to terminate connections from users (or do anything else useful). We were still able to serve all the users from our own datacenters, and after a few minutes the only effect visible externally was a slight increase in latency. As far as we could tell, very few users noticed the problem at all, thanks to good capacity planning (at least we got that right!). Meanwhile, we spent the better part of two days reinstalling the machines in the affected colo racks; then we spent the following weeks auditing and adding more sanity checks—including rate limiting—into our automation, and making our decommission workflow idempotent.

26 For readers who already feel they precisely understand the value of automation, skip ahead to The Value for Google SRE. However, note that our description contains some nuances that might be useful to keep in mind while reading the rest of the chapter.
27 The expertise acquired in building such automation is also valuable in itself; engineers both deeply understand the existing processes they have automated and can later automate novel processes more quickly.
28 See the following XKCD cartoon: https://xkcd.com/1205/.
29 See, for example, https://blog.engineyard.com/2014/pets-vs-cattle.
30 Of course, not every system that needs to be managed actually provides callable APIs for management—forcing some tooling to use, e.g., CLI invocations or automated website clicks.
31 We have compressed and simplified this history to aid understanding.
32 As in a small, unchanging number.
33 See, e.g., https://en.wikipedia.org/wiki/Air_France_Flight_447.
34 See, e.g., [Bai83] and [Sar97].
35. This is yet another good reason for regular practice drills; see Disaster Role Playing.

Copyright © 2017 Google, Inc. Published by O'Reilly Media, Inc. Licensed under CC BY-NC-ND 4.0
Chapter 7 - The Evolution of Automation at Google

Written by Niall Murphy with John Looney and Michael Kacirek
Edited by Betsy Beyer

Besides black art, there is only automation and mechanization.
Federico García Lorca (1898–1936), Spanish poet and playwright

For SRE, automation is a force multiplier, not a panacea.
Of course, just multiplying force does not naturally change the accuracy of where that force is applied: doing automation thoughtlessly can create as many problems as it solves. Therefore, while we believe that software-based automation is superior to manual operation in most circumstances, better than either option is a higher-level system design requiring neither of them—an autonomous system. Or to put it another way, the value of automation comes from both what it does and its judicious application. We’ll discuss both the value of automation and how our attitude has evolved over time.

The Value of Automation

What exactly is the value of automation? 26

Consistency

Although scale is an obvious motivation for automation, there are many other reasons to use it. Take the example of university computing systems, where many systems engineering folks started their careers. Systems administrators of that background were generally charged with running a collection of machines or some software, and were accustomed to manually performing various actions in the discharge of that duty. One common example is creating user accounts; others include purely operational duties like making sure backups happen, managing server failover, and small data manipulations like changing the upstream DNS servers’ resolv.conf, DNS server zone data, and similar activities. Ultimately, however, this prevalence of manual tasks is unsatisfactory for both the organizations and indeed the people maintaining systems in this way. For a start, any action performed by a human or humans hundreds of times won’t be performed the same way each time: even with the best will in the world, very few of us will ever be as consistent as a machine. This inevitable lack of consistency leads to mistakes, oversights, issues with data quality, and, yes, reliability problems. In this domain—the execution of well-scoped, known procedures—the value of consistency is in many ways the primary value of automation.
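The consistency point can be made concrete with a tiny sketch (the data model and function here are hypothetical, not any real provisioning tool): an idempotent account-creation routine performs exactly the same steps on every run, which is precisely what a human repeating a checklist hundreds of times cannot guarantee.

```python
def ensure_account(users, name):
    """Idempotently ensure an account exists in a name -> uid map.

    Running this once or a hundred times yields the same final state,
    unlike a manual checklist where a step is eventually skipped or varied.
    """
    if name in users:
        return users[name]  # already present: change nothing
    uid = max(users.values(), default=1000) + 1  # deterministic next uid
    users[name] = uid
    return uid
```

Idempotence is what makes such a routine safe to rerun from a scheduler or a retry loop, a theme that recurs later in the chapter.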
A Platform

Automation doesn’t just provide consistency. Designed and done properly, automatic systems also provide a platform that can be extended, applied to more systems, or perhaps even spun out for profit. 27 (The alternative, no automation, is neither cost effective nor extensible: it is instead a tax levied on the operation of a system.) A platform also centralizes mistakes. In other words, a bug fixed in the code will be fixed there once and forever, unlike a sufficiently large set of humans performing the same procedure, as discussed previously. A platform can be extended to perform additional tasks more easily than humans can be instructed to perform them (or sometimes even realize that they have to be done). Depending on the nature of the task, it can run either continuously or much more frequently than humans could appropriately accomplish the task, or at times that are inconvenient for humans. Furthermore, a platform can export metrics about its performance, or otherwise allow you to discover details about your process you didn’t know previously, because these details are more easily measurable within the context of a platform.

Faster Repairs

There’s an additional benefit for systems where automation is used to resolve common faults in a system (a frequent situation for SRE-created automation). If automation runs regularly and successfully enough, the result is a reduced mean time to repair (MTTR) for those common faults. You can then spend your time on other tasks instead, thereby achieving increased developer velocity because you don’t have to spend time either preventing a problem or (more commonly) cleaning up after it. As is well understood in the industry, the later in the product lifecycle a problem is discovered, the more expensive it is to fix; see Testing for Reliability.
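As an illustrative sketch of the "platform exports metrics" point (all names here are hypothetical), even a minimal wrapper around an automated task can record counts and durations that a purely manual process never captures:

```python
import time

class TaskMetrics:
    """Counters a platform can export about every run of an automated task."""
    def __init__(self):
        self.runs = 0
        self.failures = 0
        self.total_seconds = 0.0

def run_instrumented(metrics, task):
    """Run one task, recording success/failure and wall-clock duration."""
    start = time.monotonic()
    metrics.runs += 1
    try:
        return task()
    except Exception:
        metrics.failures += 1
        return None
    finally:
        metrics.total_seconds += time.monotonic() - start
```

A real platform would export these counters to a monitoring system; the point is simply that measurement comes nearly for free once the procedure is code.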
Generally, problems that occur in actual production are most expensive to fix, both in terms of time and money, which means that an automated system looking for problems as soon as they arise has a good chance of lowering the total cost of the system, given that the system is sufficiently large.

Faster Action

In the infrastructural situations where SRE automation tends to be deployed, humans don’t usually react as fast as machines. In most common cases, where, for example, failover or traffic switching can be well defined for a particular application, it makes no sense to effectively require a human to intermittently press a button called “Allow system to continue to run.” (Yes, it is true that sometimes automatic procedures can end up making a bad situation worse, but that is why such procedures should be scoped over well-defined domains.) Google has a large amount of automation; in many cases, the services we support could not long survive without this automation because they crossed the threshold of manageable manual operation long ago.

Time Saving

Finally, time saving is an oft-quoted rationale for automation. Although people cite this rationale for automation more than the others, in many ways the benefit is often less immediately calculable. Engineers often waver over whether a particular piece of automation or code is worth writing, in terms of effort saved in not requiring a task to be performed manually versus the effort required to write it. 28 It’s easy to overlook the fact that once you have encapsulated some task in automation, anyone can execute the task. Therefore, the time savings apply across anyone who would plausibly use the automation. Decoupling operator from operation is very powerful. Joseph Bironas, an SRE who led Google’s datacenter turnup efforts for a time, forcefully argued: "If we are engineering processes and solutions that are not automatable, we continue having to staff humans to maintain the system.
If we have to staff humans to do the work, we are feeding the machines with the blood, sweat, and tears of human beings. Think The Matrix with less special effects and more pissed off System Administrators."

The Value for Google SRE

All of these benefits and trade-offs apply to us just as much as anyone else, and Google does have a strong bias toward automation. Part of our preference for automation springs from our particular business challenges: the products and services we look after are planet-spanning in scale, and we don’t typically have time to engage in the same kind of machine or service hand-holding common in other organizations. 29 For truly large services, the factors of consistency, quickness, and reliability dominate most conversations about the trade-offs of performing automation. Another argument in favor of automation, particularly in the case of Google, is our complicated yet surprisingly uniform production environment, described in The Production Environment at Google, from the Viewpoint of an SRE. While other organizations might have an important piece of equipment without a readily accessible API, software for which no source code is available, or another impediment to complete control over production operations, Google generally avoids such scenarios. We have built APIs for systems when no API was available from the vendor. Even though purchasing software for a particular task would have been much cheaper in the short term, we chose to write our own solutions, because doing so produced APIs with the potential for much greater long-term benefits. We spent a lot of time overcoming obstacles to automatic system management, and then resolutely developed that automatic system management itself. Given how Google manages its source code [Pot16], the availability of that code for more or less any system that SRE touches also means that our mission to “own the product in production” is much easier because we control the entirety of the stack.
Of course, although Google is ideologically bent upon using machines to manage machines where possible, reality requires some modification of our approach. It isn’t appropriate to automate every component of every system, and not everyone has the ability or inclination to develop automation at a particular time. Some essential systems started out as quick prototypes, not designed to last or to interface with automation. The previous paragraphs state a maximalist view of our position, but one that we have been broadly successful at putting into action within the Google context. In general, we have chosen to create platforms where we could, or to position ourselves so that we could create platforms over time. We view this platform-based approach as necessary for manageability and scalability.

The Use Cases for Automation

In the industry, automation is the term generally used for writing code to solve a wide variety of problems, although the motivations for writing this code, and the solutions themselves, are often quite different. More broadly, in this view, automation is “meta-software”—software to act on software. As we implied earlier, there are a number of use cases for automation. Here is a non-exhaustive list of examples:

- User account creation
- Cluster turnup and turndown for services
- Software or hardware installation preparation and decommissioning
- Rollouts of new software versions
- Runtime configuration changes
- A special case of runtime config changes: changes to your dependencies

This list could continue essentially ad infinitum.

Google SRE’s Use Cases for Automation

In Google, we have all of the use cases just listed, and more. However, within Google SRE, our primary affinity has typically been for running infrastructure, as opposed to managing the quality of the data that passes over that infrastructure.
This line isn’t totally clear—for example, we care deeply if half of a dataset vanishes after a push, and therefore we alert on coarse-grain differences like this, but it’s rare for us to write the equivalent of changing the properties of some arbitrary subset of accounts on a system. Therefore, the context for our automation is often automation to manage the lifecycle of systems, not their data: for example, deployments of a service in a new cluster. To this extent, SRE’s automation efforts are not far off what many other people and organizations do, except that we use different tools to manage it and have a different focus (as we’ll discuss). Widely available tools like Puppet, Chef, cfengine, and even Perl, which all provide ways to automate particular tasks, differ mostly in terms of the level of abstraction of the components provided to help the act of automating. A full language like Perl provides POSIX-level affordances, which in theory provide an essentially unlimited scope of automation across the APIs accessible to the system, 30 whereas Chef and Puppet provide out-of-the-box abstractions with which services or other higher-level entities can be manipulated. The trade-off here is classic: higher-level abstractions are easier to manage and reason about, but when you encounter a “leaky abstraction,” you fail systemically, repeatedly, and potentially inconsistently. For example, we often assume that pushing a new binary to a cluster is atomic; the cluster will either end up with the old version, or the new version. However, real-world behavior is more complicated: that cluster’s network can fail halfway through; machines can fail; communication to the cluster management layer can fail, leaving the system in an inconsistent state; depending on the situation, new binaries could be staged but not pushed, or pushed but not restarted, or restarted but not verifiable. 
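One way to cope with the non-atomic reality just described is to model the push explicitly as a sequence of verifiable stages rather than assuming "old version or new version." A minimal sketch (the stage names are illustrative, not any real Google tool):

```python
# Rollout stages a binary push can verifiably reach, in order.
ORDER = ["none", "staged", "pushed", "restarted", "verified"]

def next_action(last_verified_stage):
    """Decide the next step from the last stage we could actually verify.

    An unrecognized stage means the system is in an inconsistent state, so
    the automation halts and calls for a human instead of guessing.
    """
    if last_verified_stage not in ORDER:
        return "halt-and-page"
    if last_verified_stage == "verified":
        return "done"
    return "do-" + ORDER[ORDER.index(last_verified_stage) + 1]
```

Making each stage an observable checkpoint is what lets the automation distinguish "staged but not pushed" from "pushed but not restarted" instead of failing inconsistently.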
Very few abstractions model these kinds of outcomes successfully, and most generally end up halting themselves and calling for intervention. Truly bad automation systems don’t even do that. SRE has a number of philosophies and products in the domain of automation, some of which look more like generic rollout tools without particularly detailed modeling of higher-level entities, and some of which look more like languages for describing service deployment (and so on) at a very abstract level. Work done in the latter tends to be more reusable and be more of a common platform than the former, but the complexity of our production environment sometimes means that the former approach is the most immediately tractable option.

A Hierarchy of Automation Classes

Although all of these automation steps are valuable, and indeed an automation platform is valuable in and of itself, in an ideal world, we wouldn’t need externalized automation. In fact, instead of having a system that has to have external glue logic, it would be even better to have a system that needs no glue logic at all, not just because internalization is more efficient (although such efficiency is useful), but because it has been designed to not need glue logic in the first place. Accomplishing that involves taking the use cases for glue logic—generally “first order” manipulations of a system, such as adding accounts or performing system turnup—and finding a way to handle those use cases directly within the application. As a more detailed example, most turnup automation at Google is problematic because it ends up being maintained separately from the core system and therefore suffers from “bit rot,” i.e., not changing when the underlying systems change. Despite the best of intentions, attempting to more tightly couple the two (turnup automation and the core system) often fails due to unaligned priorities, as product developers will, not unreasonably, resist a test deployment requirement for every change.
Secondly, automation that is crucial but only executed at infrequent intervals and therefore difficult to test is often particularly fragile because of the extended feedback cycle. Cluster failover is one classic example of infrequently executed automation: failovers might only occur every few months, or infrequently enough that inconsistencies between instances are introduced. The evolution of automation follows a path:

1) No automation: the database master is failed over manually between locations.
2) Externally maintained system-specific automation: an SRE has a failover script in his or her home directory.
3) Externally maintained generic automation: the SRE adds database support to a "generic failover" script that everyone uses.
4) Internally maintained system-specific automation: the database ships with its own failover script.
5) Systems that don’t need any automation: the database notices problems, and automatically fails over without human intervention.

SRE hates manual operations, so we obviously try to create systems that don’t require them. However, sometimes manual operations are unavoidable. There is additionally a subvariety of automation that applies changes not across the domain of specific system-related configuration, but across the domain of production as a whole. In a highly centralized proprietary production environment like Google’s, there are a large number of changes that have a non–service-specific scope—e.g., changing upstream Chubby servers, a flag change to the Bigtable client library to make access more reliable, and so on—which nonetheless need to be safely managed and rolled back if necessary. Beyond a certain volume of changes, it is infeasible for production-wide changes to be accomplished manually, and at some time before that point, it’s a waste to have manual oversight for a process where a large proportion of the changes are either trivial or accomplished successfully by basic relaunch-and-check strategies.
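The final stage of the evolution path above, a database that notices problems and fails over without human intervention, can be sketched in miniature (real health checking and replica election are far harder; this only shows the shape of the idea):

```python
def pick_master(replicas, healthy):
    """Return the serving master, failing over automatically when needed.

    replicas: preference-ordered replica names; replicas[0] is the current master.
    healthy:  maps replica name -> bool, from some health-checking mechanism.
    """
    for candidate in replicas:
        if healthy.get(candidate, False):
            return candidate  # first healthy candidate serves; no human involved
    raise RuntimeError("no healthy replica available; page a human after all")
```

Note that even the autonomous system keeps a human escape hatch for the state it cannot handle, echoing the earlier point that automation should halt and call for intervention rather than guess.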
Let’s use internal case studies to illustrate some of the preceding points in detail. The first case study is about how, due to some diligent, far-sighted work, we managed to achieve the self-professed nirvana of SRE: to automate ourselves out of a job.

Automate Yourself Out of a Job: Automate ALL the Things!

For a long while, the Ads products at Google stored their data in a MySQL database. Because Ads data obviously has high reliability requirements, an SRE team was charged with looking after that infrastructure. From 2005 to 2008, the Ads Database mostly ran in what we considered to be a mature and managed state. For example, we had automated away the worst, but not all, of the routine work for standard replica replacements. We believed the Ads Database was well managed and that we had harvested most of the low-hanging fruit in terms of optimization and scale. However, as daily operations became comfortable, team members began to look at the next level of system development: migrating MySQL onto Google’s cluster scheduling system, Borg. We hoped this migration would provide two main benefits:

- Completely eliminate machine/replica maintenance: Borg would automatically handle the setup and restart of new and broken tasks.
- Enable bin-packing of multiple MySQL instances on the same physical machine: Borg would enable more efficient use of machine resources via containers.

In late 2008, we successfully deployed a proof of concept MySQL instance on Borg. Unfortunately, this was accompanied by a significant new difficulty. A core operating characteristic of Borg is that its tasks move around automatically. Tasks commonly move within Borg as frequently as once or twice per week. This frequency was tolerable for our database replicas, but unacceptable for our masters. At that time, the process for master failover took 30–90 minutes per instance.
Simply because we ran on shared machines and were subject to reboots for kernel upgrades, in addition to the normal rate of machine failure, we had to expect a number of otherwise unrelated failovers every week. This factor, in combination with the number of shards on which our system was hosted, meant that:

- Manual failovers would consume a substantial amount of human hours and would give us best-case availability of 99% uptime, which fell short of the actual business requirements of the product.
- In order to meet our error budgets, each failover would have to take less than 30 seconds of downtime.
- There was no way to optimize a human-dependent procedure to make downtime shorter than 30 seconds.

Therefore, our only choice was to automate failover. Actually, we needed to automate more than just failover. In 2009 Ads SRE completed our automated failover daemon, which we dubbed “Decider.” Decider could complete MySQL failovers for both planned and unplanned failovers in less than 30 seconds 95% of the time. With the creation of Decider, MySQL on Borg (MoB) finally became a reality. We graduated from optimizing our infrastructure for a lack of failover to embracing the idea that failure is inevitable, and therefore optimizing to recover quickly through automation. While automation let us achieve highly available MySQL in a world that forced up to two restarts per week, it did come with its own set of costs. All of our applications had to be changed to include significantly more failure-handling logic than before. Given that the norm in the MySQL development world is to assume that the MySQL instance will be the most stable component in the stack, this switch meant customizing software like JDBC to be more tolerant of our failure-prone environment. However, the benefits of migrating to MoB with Decider were well worth these costs. Once on MoB, the time our team spent on mundane operational tasks dropped by 95%.
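A back-of-the-envelope check shows why roughly 99% was the ceiling and why a sub-30-second failover changes the picture. The two-restarts-per-week and 30–90-minute figures come from the text; taking 45 minutes as the midpoint is our own illustrative choice:

```python
MINUTES_PER_WEEK = 7 * 24 * 60  # 10,080

def shard_availability(failovers_per_week, minutes_per_failover):
    """Best-case availability of one shard, counting only failover downtime."""
    downtime = failovers_per_week * minutes_per_failover
    return 1 - downtime / MINUTES_PER_WEEK

manual = shard_availability(2, 45)    # two 45-minute manual failovers a week
decider = shard_availability(2, 0.5)  # two 30-second automated failovers
```

With manual failovers a shard tops out near 99.1% availability; cutting each failover to under 30 seconds pushes the same shard to roughly 99.99%.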
Our failovers were automated, so an outage of a single database task no longer paged a human. The main upshot of this new automation was that we had a lot more free time to spend on improving other parts of the infrastructure. Such improvements had a cascading effect: the more time we saved, the more time we were able to spend on optimizing and automating other tedious work. Eventually, we were able to automate schema changes, causing the cost of total operational maintenance of the Ads Database to drop by nearly 95%. Some might say that we had successfully automated ourselves out of this job. The hardware side of our domain also saw improvement. Migrating to MoB freed up considerable resources because we could schedule multiple MySQL instances on the same machines, which improved utilization of our hardware. In total, we were able to free up about 60% of our hardware. Our team was now flush with hardware and engineering resources. This example demonstrates the wisdom of going the extra mile to deliver a platform rather than replacing existing manual procedures. The next example comes from the cluster infrastructure group, and illustrates some of the more difficult trade-offs you might encounter on your way to automating all the things.

Soothing the Pain: Applying Automation to Cluster Turnups

Ten years ago, the Cluster Infrastructure SRE team seemed to get a new hire every few months. As it turned out, that was approximately the same frequency at which we turned up a new cluster. Because turning up a service in a new cluster gives new hires exposure to a service’s internals, this task seemed like a natural and useful training tool. The steps taken to get a cluster ready for use were something like the following:

1. Fit out a datacenter building for power and cooling.
2. Install and configure core switches and connections to the backbone.
3. Install a few initial racks of servers.
4. Configure basic services such as DNS and installers, then configure a lock service, storage, and computing.
5. Deploy the remaining racks of machines.
6. Assign user-facing services resources, so their teams can set up the services.

Steps 4 and 6 were extremely complex. While basic services like DNS are relatively simple, the storage and compute subsystems at that time were still in heavy development, so new flags, components, and optimizations were added weekly. Some services had more than a hundred different component subsystems, each with a complex web of dependencies. Failing to configure one subsystem, or configuring a system or component differently than other deployments, is a customer-impacting outage waiting to happen.

In one case, a multi-petabyte Bigtable cluster was configured to not use the first (logging) disk on 12-disk systems, for latency reasons. A year later, some automation assumed that if a machine’s first disk wasn’t being used, that machine didn’t have any storage configured; therefore, it was safe to wipe the machine and set it up from scratch. All of the Bigtable data was wiped, instantly. Thankfully we had multiple real-time replicas of the dataset, but such surprises are unwelcome. Automation needs to be careful about relying on implicit "safety" signals.

Early automation focused on accelerating cluster delivery. This approach tended to rely upon creative use of SSH for tedious package distribution and service initialization problems. This strategy was an initial win, but those free-form scripts became a cholesterol of technical debt.

Detecting Inconsistencies with Prodtest

As the number of clusters grew, some clusters required hand-tuned flags and settings. As a result, teams wasted more and more time chasing down difficult-to-spot misconfigurations. If a flag that made GFS more responsive to log processing leaked into the default templates, cells with many files could run out of memory under load.
Infuriating and time-consuming misconfigurations crept in with nearly every large configuration change. The creative—though brittle—shell scripts we used to configure clusters were neither scaling to the number of people who wanted to make changes nor to the sheer number of cluster permutations that needed to be built. These shell scripts also failed to resolve more significant concerns before declaring that a service was good to take customer-facing traffic, such as:

- Were all of the service’s dependencies available and correctly configured?
- Were all configurations and packages consistent with other deployments?
- Could the team confirm that every configuration exception was desired?

Prodtest (Production Test) was an ingenious solution to these unwelcome surprises. We extended the Python unit test framework to allow for unit testing of real-world services. These unit tests have dependencies, allowing a chain of tests, and a failure in one test would quickly abort. Take the test shown in Figure 7-1 as an example.

Figure 7-1. ProdTest for DNS Service, showing how one failed test aborts the subsequent chain of tests

A given team’s Prodtest was given the cluster name, and it could validate that team’s services in that cluster. Later additions allowed us to generate a graph of the unit tests and their states. This functionality allowed an engineer to see quickly if their service was correctly configured in all clusters, and if not, why. The graph highlighted the failed step, and the failing Python unit test output a more verbose error message. Any time a team encountered a delay due to another team’s unexpected misconfiguration, a bug could be filed to extend their Prodtest. This ensured that a similar problem would be discovered earlier in the future. SREs were proud to be able to assure their customers that all services—both newly turned up services and existing services with new configuration—would reliably serve production traffic.
For the first time, our project managers could predict when a cluster could "go live," and had a complete understanding of why each cluster took six or more weeks to go from "network-ready" to "serving live traffic." Out of the blue, SRE received a mission from senior management: In three months, five new clusters will reach network-ready on the same day. Please turn them up in one week.

Resolving Inconsistencies Idempotently

A "One Week Turnup" was a terrifying mission. We had tens of thousands of lines of shell script owned by dozens of teams. We could quickly tell how unprepared any given cluster was, but fixing it meant that the dozens of teams would have to file hundreds of bugs, and then we had to hope that these bugs would be promptly fixed. We realized that evolving from "Python unit tests finding misconfigurations" to "Python code fixing misconfigurations" could enable us to fix these issues faster. The unit test already knew which cluster we were examining and the specific test that was failing, so we paired each test with a fix. If each fix was written to be idempotent, and could assume that all dependencies were met, resolving the problem should have been easy—and safe. Requiring idempotent fixes meant teams could run their "fix script" every 15 minutes without fearing damage to the cluster’s configuration. If the DNS team’s test was blocked on the Machine Database team’s configuration of a new cluster, as soon as the cluster appeared in the database, the DNS team’s tests and fixes would start working. Take the test shown in Figure 7-2 as an example. If TestDnsMonitoringConfigExists fails, as shown, we can call FixDnsMonitoringCreateConfig, which scrapes configuration from a database, then checks a skeleton configuration file into our revision control system. Then TestDnsMonitoringConfigExists passes on retry, and the TestDnsMonitoringConfigPushed test can be attempted. If the test fails, the FixDnsMonitoringPushConfig step runs.
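The test-and-fix pairing can be sketched as a small driver loop (fixes like FixDnsMonitoringCreateConfig from the text stand behind the generic `fix` callables here; the bounded retry reflects the behavior of eventually giving up and notifying a human):

```python
def run_with_fixes(checks, max_attempts=3):
    """checks: ordered (test, fix) pairs; test() -> bool, fix() must be idempotent.

    Returns indices of checks that never passed, for a human to investigate.
    """
    needs_human = []
    for i, (test, fix) in enumerate(checks):
        for _ in range(max_attempts):
            if test():
                break
            fix()  # idempotent, so rerunning every 15 minutes cannot do damage
        else:
            needs_human.append(i)  # fix kept failing: stop and notify the user
    return needs_human
```

Because each fix is idempotent and each test rechecks real state, the loop can be run from a scheduler until a blocked dependency (like the Machine Database entry) appears, at which point downstream fixes start succeeding on their own.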
If a fix fails multiple times, the automation assumes that the fix failed and stops, notifying the user. Armed with these scripts, a small group of engineers could ensure that we could go from "The network works, and machines are listed in the database" to "Serving 1% of websearch and ads traffic" in a matter of a week or two. At the time, this seemed to be the apex of automation technology. Looking back, this approach was deeply flawed; the latency between the test, the fix, and then a second test introduced flaky tests that sometimes worked and sometimes failed. Not all fixes were naturally idempotent, so a flaky test that was followed by a fix might render the system in an inconsistent state.

Figure 7-2. ProdTest for DNS Service, showing that one failed test resulted in only running one fix

The Inclination to Specialize

Automation processes can vary in three respects:

- Competence, i.e., their accuracy
- Latency, how quickly all steps are executed when initiated
- Relevance, or the proportion of the real-world process covered by automation

We began with a process that was highly competent (maintained and run by the service owners), high-latency (the service owners performed the process in their spare time or assigned it to new engineers), and very relevant (the service owners knew when the real world changed, and could fix the automation). To reduce turnup latency, many service owning teams instructed a single "turnup team" what automation to run. The turnup team used tickets to start each stage in the turnup so that we could track the remaining tasks, and who those tasks were assigned to. If the human interactions regarding automation modules occurred between people in the same room, cluster turnups could happen in a much shorter time. Finally, we had our competent, accurate, and timely automation process! But this state didn’t last long. The real world is chaotic: software, configuration, data, etc.
changed, resulting in over a thousand separate changes a day to affected systems. The people most affected by automation bugs were no longer domain experts, so the automation became less relevant (meaning that new steps were missed) and less competent (new flags might have caused automation to fail). However, it took a while for this drop in quality to impact velocity. Automation code, like unit test code, dies when the maintaining team isn’t obsessive about keeping the code in sync with the codebase it covers. The world changes around the code: the DNS team adds new configuration options, the storage team changes their package names, and the networking team needs to support new devices. By relieving teams who ran services of the responsibility to maintain and run their automation code, we created ugly organizational incentives:

- A team whose primary task is to speed up the current turnup has no incentive to reduce the technical debt of the service-owning team running the service in production later.
- A team not running automation has no incentive to build systems that are easy to automate.
- A product manager whose schedule is not affected by low-quality automation will always prioritize new features over simplicity and automation.

The most functional tools are usually written by those who use them. A similar argument applies to why product development teams benefit from keeping at least some operational awareness of their systems in production. Turnups were again high-latency, inaccurate, and incompetent—the worst of all worlds. However, an unrelated security mandate allowed us out of this trap. Much of distributed automation relied at that time on SSH. This is clumsy from a security perspective, because people must have root on many machines to run most commands. A growing awareness of advanced, persistent security threats drove us to reduce the privileges SREs enjoyed to the absolute minimum they needed to do their jobs.
We had to replace our use of sshd with an authenticated, ACL-driven, RPC-based Local Admin Daemon, also known as the Admin Server, which had permissions to perform those local changes. As a result, no one could install or modify a server without an audit trail. Changes to both the Local Admin Daemon and the Package Repo were gated on code reviews, making it very difficult for someone to exceed their authority; giving someone access to install packages would not let them view colocated logs. The Admin Server logged the RPC requestor, any parameters, and the results of all RPCs to enhance debugging and security audits.

Service-Oriented Cluster-Turnup

In the next iteration, Admin Servers became part of service teams’ workflows, both as machine-specific Admin Servers (for installing packages and rebooting) and cluster-level Admin Servers (for actions like draining or turning up a service). SREs moved from writing shell scripts in their home directories to building peer-reviewed RPC servers with fine-grained ACLs.

Later on, after the realization that turnup processes had to be owned by the teams that owned the services fully sank in, we came to approach cluster turnup as a Service-Oriented Architecture (SOA) problem: service owners would be responsible for creating an Admin Server to handle cluster turnup/turndown RPCs, sent by the system that knew when clusters were ready. In turn, each team provided the contract (API) that the turnup automation needed, while remaining free to change the underlying implementation. As a cluster reached "network-ready," automation sent an RPC to each Admin Server that played a part in turning up the cluster. We now have a low-latency, competent, and accurate process; most importantly, this process has stayed strong as the rate of change, the number of teams, and the number of services seem to double each year.
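The Admin Server pattern described above can be sketched in miniature: an RPC-style dispatcher that enforces a per-method ACL and logs the requestor, parameters, and result of every call for auditing. This is an illustrative toy, not Google's actual implementation; the class, method names, and ACL layout are all invented for this example.

```python
import logging

# Toy sketch of an ACL-checked, audit-logged Admin Server.
# All names here are invented for illustration; the real Admin
# Servers are internal Google systems.

AUDIT = logging.getLogger("admin_audit")

class AdminServer:
    def __init__(self, acls, handlers):
        self.acls = acls          # method name -> set of allowed principals
        self.handlers = handlers  # method name -> callable(**params)

    def handle_rpc(self, requestor, method, **params):
        """Dispatch one RPC, enforcing the ACL and auditing everything."""
        if requestor not in self.acls.get(method, set()):
            AUDIT.warning("DENY %s %s %r", requestor, method, params)
            raise PermissionError(f"{requestor} may not call {method}")
        result = self.handlers[method](**params)
        # Log requestor, parameters, and result, as the text describes.
        AUDIT.info("OK %s %s %r -> %r", requestor, method, params, result)
        return result
```

Because the ACLs are per-method, granting someone `install_package` says nothing about a hypothetical `read_logs` method, mirroring the separation of authority the text describes.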
As mentioned earlier, our evolution of turnup automation followed a path:

Operator-triggered manual action (no automation)
Operator-written, system-specific automation
Externally maintained generic automation
Internally maintained, system-specific automation
Autonomous systems that need no human intervention

While this evolution has, broadly speaking, been a success, the Borg case study illustrates another way we have come to think of the problem of automation.

Borg: Birth of the Warehouse-Scale Computer

Another way to understand the development of our attitude toward automation, and when and where that automation is best deployed, is to consider the history of the development of our cluster management systems. 31 Like MySQL on Borg, which demonstrated the success of converting manual operations to automatic ones, and the cluster turnup process, which demonstrated the downside of not thinking carefully enough about where and how automation was implemented, developing cluster management also ended up demonstrating another lesson about how automation should be done. Like our previous two examples, something quite sophisticated was created as the eventual result of continuous evolution from simpler beginnings.

Google’s clusters were initially deployed much like everyone else’s small networks of the time: racks of machines with specific purposes and heterogeneous configurations. Engineers would log in to some well-known “master” machine to perform administrative tasks; “golden” binaries and configuration lived on these masters. As we had only one colo provider, most naming logic implicitly assumed that location. As production grew, and we began to use multiple clusters, different domains (cluster names) entered the picture. It became necessary to have a file describing what each machine did, which grouped machines under some loose naming strategy.
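A toy sketch of what such a machine descriptor file, and the parallel-SSH-style operations built on it, might have looked like. The file format and helper names are invented for illustration; the real tooling SSHed into machines, whereas this sketch takes an injected `run_cmd` callable so it can be exercised without a network.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch only: a flat "host role" descriptor file and a
# helper to run one command on every machine in a role, in parallel.

def parse_descriptor(text):
    """Parse lines like 'searchmachine1 search' into {host: role}."""
    machines = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        host, role = line.split()
        machines[host] = role
    return machines

def run_on_role(machines, role, command, run_cmd):
    """Run `command` on every machine with `role`, in parallel.

    `run_cmd(host, command)` stands in for an SSH invocation here.
    """
    hosts = [h for h, r in machines.items() if r == role]
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = list(pool.map(lambda h: (h, run_cmd(h, command)), hosts))
    return dict(results)
```

With this in hand, `run_on_role(machines, "search", "reboot", ssh_run)` is the "reboot all the search machines in one go" operation, for some SSH-invoking `ssh_run`.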
This descriptor file, in combination with the equivalent of a parallel SSH, allowed us to reboot (for example) all the search machines in one go. Around this time, it was common to get tickets like “search is done with machine x1, crawl can have the machine now.” Automation development began. Initially, automation consisted of simple Python scripts for operations such as the following:

Service management: keeping services running (e.g., restarts after segfaults)
Tracking what services were supposed to run on which machines
Log message parsing: SSHing into each machine and looking for regexps

Automation eventually mutated into a proper database that tracked machine state, and also incorporated more sophisticated monitoring tools. With the union set of the automation available, we could now automatically manage much of the lifecycle of machines: noticing when machines were broken, removing the services, sending the machines to repair, and restoring the configuration when they came back from repair.

But to take a step back, this automation was useful yet profoundly limited, because the abstractions of the system were relentlessly tied to physical machines. We needed a new approach; hence Borg [Ver15] was born: a system that moved away from the relatively static host/port/job assignments of the previous world, toward treating a collection of machines as a managed sea of resources. Central to its success—and its conception—was the notion of turning cluster management into an entity for which API calls could be issued to some central coordinator. This liberated extra dimensions of efficiency, flexibility, and reliability: unlike the previous model of machine “ownership,” Borg could allow machines to schedule, for example, batch and user-facing tasks on the same machine.
This functionality ultimately resulted in continuous and automatic operating system upgrades with a very small amount of constant 32 effort—effort that does not scale with the total size of production deployments. Slight deviations in machine state are now automatically fixed; brokenness and lifecycle management are essentially no-ops for SRE at this point. Thousands of machines are born, die, and go into repairs daily with no SRE effort. To echo the words of Ben Treynor Sloss: by taking the approach that this was a software problem, the initial automation bought us enough time to turn cluster management into something autonomous, as opposed to automated. We achieved this goal by bringing ideas related to data distribution, APIs, hub-and-spoke architectures, and classic distributed-system software development to bear upon the domain of infrastructure management.

An interesting analogy is possible here: we can make a direct mapping between the single-machine case and the development of cluster management abstractions. In this view, rescheduling on another machine looks a lot like a process moving from one CPU to another: of course, those compute resources happen to be at the other end of a network link, but to what extent does that actually matter? Thinking in these terms, rescheduling looks like an intrinsic feature of the system rather than something one would “automate”—humans couldn’t react fast enough anyway. Similarly, in the case of cluster turnup: in this metaphor, cluster turnup is simply additional schedulable capacity, a bit like adding disk or RAM to a single computer. However, a single-node computer is not, in general, expected to continue operating when a large number of components fail. The global computer is—it must be self-repairing to operate once it grows past a certain size, due to the essentially statistically guaranteed large number of failures taking place every second.
This implies that as we move systems up the hierarchy from manually triggered, to automatically triggered, to autonomous, some capacity for self-introspection is necessary to survive.

Reliability Is the Fundamental Feature

Of course, for effective troubleshooting, the details of internal operation that the introspection relies upon should also be exposed to the humans managing the overall system. Analogous discussions about the impact of automation in the noncomputer domain—for example, in airplane flight 33 or industrial applications—often point out the downside of highly effective automation: 34 human operators are progressively more relieved of useful direct contact with the system as the automation covers more and more daily activities over time. Inevitably, then, a situation arises in which the automation fails, and the humans are now unable to successfully operate the system. The fluidity of their reactions has been lost due to lack of practice, and their mental models of what the system should be doing no longer reflect the reality of what it is doing. 35

This situation arises more when the system is nonautonomous—i.e., where automation replaces manual actions, and the manual actions are presumed to be always performable and available just as they were before. Sadly, over time, this ultimately becomes false: those manual actions are not always performable, because the functionality to permit them no longer exists.

We, too, have experienced situations where automation has been actively harmful on a number of occasions—see Automation: Enabling Failure at Scale—but in Google’s experience, there are more systems for which automation or autonomous behavior are no longer optional extras. As you scale, this is of course the case, but there are still strong arguments for more autonomous behavior of systems irrespective of size. Reliability is the fundamental feature, and autonomous, resilient behavior is one useful way to get that.
Recommendations

You might read the examples in this chapter and decide that you need to be Google-scale before you have anything to do with automation whatsoever. This is untrue, for two reasons: automation provides more than just time saving, so it’s worth implementing in more cases than a simple time-expended versus time-saved calculation might suggest; and the approach with the highest leverage actually occurs in the design phase, since shipping and iterating rapidly might allow you to implement functionality faster, yet rarely makes for a resilient system. Autonomous operation is difficult to convincingly retrofit to sufficiently large systems, but standard good practices in software engineering will help considerably: having decoupled subsystems, introducing APIs, minimizing side effects, and so on.

Automation: Enabling Failure at Scale

Google runs over a dozen of its own large datacenters, but we also depend on machines in many third-party colocation facilities (or "colos"). Our machines in these colos are used to terminate most incoming connections, or as a cache for our own Content Delivery Network, in order to lower end-user latency. At any point in time, a number of these racks are being installed or decommissioned; both of these processes are largely automated. One step during decommissioning involves overwriting the full content of the disk of all the machines in the rack, after which point an independent system verifies the successful erase. We call this process "Diskerase."

Once upon a time, the automation in charge of decommissioning a particular rack failed, but only after the Diskerase step had completed successfully. Later, the decommission process was restarted from the beginning, to debug the failure. On that iteration, when trying to send the set of machines in the rack to Diskerase, the automation determined that the set of machines that still needed to be Diskerased was (correctly) empty.
Unfortunately, the empty set was used as a special value, interpreted to mean "everything." This means the automation sent almost all the machines we have in all colos to Diskerase. Within minutes, the highly efficient Diskerase wiped the disks on all machines in our CDN, and the machines were no longer able to terminate connections from users (or do anything else useful). We were still able to serve all the users from our own datacenters, and after a few minutes the only effect visible externally was a slight increase in latency. As far as we could tell, very few users noticed the problem at all, thanks to good capacity planning (at least we got that right!). Meanwhile, we spent the better part of two days reinstalling the machines in the affected colo racks; then we spent the following weeks auditing and adding more sanity checks—including rate limiting—into our automation, and making our decommission workflow idempotent.

26 For readers who already feel they precisely understand the value of automation, skip ahead to The Value for Google SRE. However, note that our description contains some nuances that might be useful to keep in mind while reading the rest of the chapter.
27 The expertise acquired in building such automation is also valuable in itself; engineers both deeply understand the existing processes they have automated and can later automate novel processes more quickly.
28 See the following XKCD cartoon: https://xkcd.com/1205/.
29 See, for example, https://blog.engineyard.com/2014/pets-vs-cattle.
30 Of course, not every system that needs to be managed actually provides callable APIs for management—forcing some tooling to use, e.g., CLI invocations or automated website clicks.
31 We have compressed and simplified this history to aid understanding.
32 As in a small, unchanging number.
33 See, e.g., https://en.wikipedia.org/wiki/Air_France_Flight_447.
34 See, e.g., [Bai83] and [Sar97].
35 This is yet another good reason for regular practice drills; see Disaster Role Playing.

Copyright © 2017 Google, Inc. Published by O'Reilly Media, Inc. Licensed under CC BY-NC-ND 4.0
https://sre.google/sre-book/automation-at-google#fig_automation_prodtest-failed
Google SRE Book, Chapter 7: The Evolution of Automation at Google

Written by Niall Murphy with John Looney and Michael Kacirek
Edited by Betsy Beyer

"Besides black art, there is only automation and mechanization."
Federico García Lorca (1898–1936), Spanish poet and playwright

For SRE, automation is a force multiplier, not a panacea.
Of course, just multiplying force does not naturally change the accuracy of where that force is applied: doing automation thoughtlessly can create as many problems as it solves. Therefore, while we believe that software-based automation is superior to manual operation in most circumstances, better than either option is a higher-level system design requiring neither of them—an autonomous system. Or to put it another way, the value of automation comes from both what it does and its judicious application. We’ll discuss both the value of automation and how our attitude has evolved over time.

The Value of Automation

What exactly is the value of automation? [26]

Consistency

Although scale is an obvious motivation for automation, there are many other reasons to use it. Take the example of university computing systems, where many systems engineering folks started their careers. Systems administrators of that background were generally charged with running a collection of machines or some software, and were accustomed to manually performing various actions in the discharge of that duty. One common example is creating user accounts; others include purely operational duties like making sure backups happen, managing server failover, and small data manipulations like changing the upstream DNS servers in resolv.conf, DNS server zone data, and similar activities. Ultimately, however, this prevalence of manual tasks is unsatisfactory for both the organizations and indeed the people maintaining systems in this way. For a start, any action performed by a human or humans hundreds of times won’t be performed the same way each time: even with the best will in the world, very few of us will ever be as consistent as a machine. This inevitable lack of consistency leads to mistakes, oversights, issues with data quality, and, yes, reliability problems. In this domain—the execution of well-scoped, known procedures—the value of consistency is in many ways the primary value of automation.
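To make the consistency argument concrete, here is a minimal sketch (all names hypothetical, not from the book) of the account-creation example: once the routine is encoded, every run validates input and performs the same steps in the same order, which a human repeating the task hundreds of times would not.

```python
# Hypothetical sketch: encoding a routine manual task (account creation)
# so that every run performs exactly the same checks and steps.
import re

VALID_NAME = re.compile(r"^[a-z][a-z0-9_-]{0,31}$")

def ensure_user(passwd_db: dict, name: str, shell: str = "/bin/bash") -> dict:
    """Idempotently ensure `name` exists in the (in-memory) passwd database."""
    if not VALID_NAME.match(name):
        raise ValueError(f"invalid username: {name!r}")  # a check humans forget
    if name in passwd_db:                  # already present: do nothing, safely
        return passwd_db[name]
    uid = 1000 + len(passwd_db)            # toy UID allocation for the sketch
    entry = {"name": name, "uid": uid, "shell": shell}
    passwd_db[name] = entry
    return entry

db = {}
ensure_user(db, "alice")
ensure_user(db, "alice")                   # a second run changes nothing
assert len(db) == 1 and db["alice"]["uid"] == 1000
```

The point is not the toy database but the shape: the procedure is identical on every invocation, and rerunning it is harmless.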
A Platform

Automation doesn’t just provide consistency. Designed and done properly, automatic systems also provide a platform that can be extended, applied to more systems, or perhaps even spun out for profit. [27] (The alternative, no automation, is neither cost effective nor extensible: it is instead a tax levied on the operation of a system.)

A platform also centralizes mistakes. In other words, a bug fixed in the code will be fixed there once and forever, unlike a sufficiently large set of humans performing the same procedure, as discussed previously. A platform can be extended to perform additional tasks more easily than humans can be instructed to perform them (or sometimes even realize that they have to be done). Depending on the nature of the task, it can run either continuously or much more frequently than humans could appropriately accomplish the task, or at times that are inconvenient for humans. Furthermore, a platform can export metrics about its performance, or otherwise allow you to discover details about your process you didn’t know previously, because these details are more easily measurable within the context of a platform.

Faster Repairs

There’s an additional benefit for systems where automation is used to resolve common faults in a system (a frequent situation for SRE-created automation). If automation runs regularly and successfully enough, the result is a reduced mean time to repair (MTTR) for those common faults. You can then spend your time on other tasks instead, thereby achieving increased developer velocity because you don’t have to spend time either preventing a problem or (more commonly) cleaning up after it. As is well understood in the industry, the later in the product lifecycle a problem is discovered, the more expensive it is to fix; see Testing for Reliability.
Generally, problems that occur in actual production are most expensive to fix, both in terms of time and money, which means that an automated system looking for problems as soon as they arise has a good chance of lowering the total cost of the system, given that the system is sufficiently large.

Faster Action

In the infrastructural situations where SRE automation tends to be deployed, humans don’t usually react as fast as machines. In most common cases, where, for example, failover or traffic switching can be well defined for a particular application, it makes no sense to effectively require a human to intermittently press a button called “Allow system to continue to run.” (Yes, it is true that sometimes automatic procedures can end up making a bad situation worse, but that is why such procedures should be scoped over well-defined domains.) Google has a large amount of automation; in many cases, the services we support could not long survive without this automation because they crossed the threshold of manageable manual operation long ago.

Time Saving

Finally, time saving is an oft-quoted rationale for automation. Although people cite this rationale for automation more than the others, in many ways the benefit is often less immediately calculable. Engineers often waver over whether a particular piece of automation or code is worth writing, in terms of effort saved in not requiring a task to be performed manually versus the effort required to write it. [28] It’s easy to overlook the fact that once you have encapsulated some task in automation, anyone can execute the task. Therefore, the time savings apply across anyone who would plausibly use the automation. Decoupling operator from operation is very powerful. Joseph Bironas, an SRE who led Google’s datacenter turnup efforts for a time, forcefully argued:

"If we are engineering processes and solutions that are not automatable, we continue having to staff humans to maintain the system.
If we have to staff humans to do the work, we are feeding the machines with the blood, sweat, and tears of human beings. Think The Matrix with less special effects and more pissed off System Administrators."

The Value for Google SRE

All of these benefits and trade-offs apply to us just as much as anyone else, and Google does have a strong bias toward automation. Part of our preference for automation springs from our particular business challenges: the products and services we look after are planet-spanning in scale, and we don’t typically have time to engage in the same kind of machine or service hand-holding common in other organizations. [29] For truly large services, the factors of consistency, quickness, and reliability dominate most conversations about the trade-offs of performing automation.

Another argument in favor of automation, particularly in the case of Google, is our complicated yet surprisingly uniform production environment, described in The Production Environment at Google, from the Viewpoint of an SRE. While other organizations might have an important piece of equipment without a readily accessible API, software for which no source code is available, or another impediment to complete control over production operations, Google generally avoids such scenarios. We have built APIs for systems when no API was available from the vendor. Even though purchasing software for a particular task would have been much cheaper in the short term, we chose to write our own solutions, because doing so produced APIs with the potential for much greater long-term benefits. We spent a lot of time overcoming obstacles to automatic system management, and then resolutely developed that automatic system management itself. Given how Google manages its source code [Pot16], the availability of that code for more or less any system that SRE touches also means that our mission to “own the product in production” is much easier because we control the entirety of the stack.
Of course, although Google is ideologically bent upon using machines to manage machines where possible, reality requires some modification of our approach. It isn’t appropriate to automate every component of every system, and not everyone has the ability or inclination to develop automation at a particular time. Some essential systems started out as quick prototypes, not designed to last or to interface with automation. The previous paragraphs state a maximalist view of our position, but one that we have been broadly successful at putting into action within the Google context. In general, we have chosen to create platforms where we could, or to position ourselves so that we could create platforms over time. We view this platform-based approach as necessary for manageability and scalability.

The Use Cases for Automation

In the industry, automation is the term generally used for writing code to solve a wide variety of problems, although the motivations for writing this code, and the solutions themselves, are often quite different. More broadly, in this view, automation is “meta-software”—software to act on software. As we implied earlier, there are a number of use cases for automation. Here is a non-exhaustive list of examples:

- User account creation
- Cluster turnup and turndown for services
- Software or hardware installation preparation and decommissioning
- Rollouts of new software versions
- Runtime configuration changes
- A special case of runtime config changes: changes to your dependencies

This list could continue essentially ad infinitum.

Google SRE’s Use Cases for Automation

In Google, we have all of the use cases just listed, and more. However, within Google SRE, our primary affinity has typically been for running infrastructure, as opposed to managing the quality of the data that passes over that infrastructure.
This line isn’t totally clear—for example, we care deeply if half of a dataset vanishes after a push, and therefore we alert on coarse-grain differences like this, but it’s rare for us to write the equivalent of changing the properties of some arbitrary subset of accounts on a system. Therefore, the context for our automation is often automation to manage the lifecycle of systems, not their data: for example, deployments of a service in a new cluster.

To this extent, SRE’s automation efforts are not far off what many other people and organizations do, except that we use different tools to manage it and have a different focus (as we’ll discuss). Widely available tools like Puppet, Chef, cfengine, and even Perl, which all provide ways to automate particular tasks, differ mostly in terms of the level of abstraction of the components provided to help the act of automating. A full language like Perl provides POSIX-level affordances, which in theory provide an essentially unlimited scope of automation across the APIs accessible to the system, [30] whereas Chef and Puppet provide out-of-the-box abstractions with which services or other higher-level entities can be manipulated. The trade-off here is classic: higher-level abstractions are easier to manage and reason about, but when you encounter a “leaky abstraction,” you fail systemically, repeatedly, and potentially inconsistently.

For example, we often assume that pushing a new binary to a cluster is atomic; the cluster will either end up with the old version, or the new version. However, real-world behavior is more complicated: that cluster’s network can fail halfway through; machines can fail; communication to the cluster management layer can fail, leaving the system in an inconsistent state; depending on the situation, new binaries could be staged but not pushed, or pushed but not restarted, or restarted but not verifiable.
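The partial-push states just enumerated can be made explicit rather than assumed away. The following sketch (hypothetical, not any real rollout tool) classifies the stage each task actually reached instead of treating the push as atomic:

```python
# Sketch (hypothetical names): verifying each stage of a binary push
# instead of assuming the push is atomic.
from enum import Enum, auto

class PushState(Enum):
    NOT_STAGED = auto()
    STAGED_NOT_PUSHED = auto()
    PUSHED_NOT_RESTARTED = auto()
    RESTARTED_NOT_VERIFIED = auto()
    VERIFIED = auto()

def classify(task: dict) -> PushState:
    """Map observed facts about a task to the stage it actually reached."""
    if not task.get("staged"):
        return PushState.NOT_STAGED
    if not task.get("pushed"):
        return PushState.STAGED_NOT_PUSHED
    if not task.get("restarted"):
        return PushState.PUSHED_NOT_RESTARTED
    if not task.get("health_ok"):
        return PushState.RESTARTED_NOT_VERIFIED
    return PushState.VERIFIED

# The push halts (rather than guessing) unless every task is verified.
tasks = [
    {"staged": True, "pushed": True, "restarted": True, "health_ok": True},
    {"staged": True, "pushed": True, "restarted": False},
]
states = [classify(t) for t in tasks]
needs_intervention = any(s is not PushState.VERIFIED for s in states)
assert needs_intervention  # halt and call for a human
```

An abstraction that only models "old version" and "new version" has no name for the middle three states, which is exactly where the leaks happen.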
Very few abstractions model these kinds of outcomes successfully, and most generally end up halting themselves and calling for intervention. Truly bad automation systems don’t even do that.

SRE has a number of philosophies and products in the domain of automation, some of which look more like generic rollout tools without particularly detailed modeling of higher-level entities, and some of which look more like languages for describing service deployment (and so on) at a very abstract level. Work done in the latter tends to be more reusable and be more of a common platform than the former, but the complexity of our production environment sometimes means that the former approach is the most immediately tractable option.

A Hierarchy of Automation Classes

Although all of these automation steps are valuable, and indeed an automation platform is valuable in and of itself, in an ideal world, we wouldn’t need externalized automation. In fact, instead of having a system that has to have external glue logic, it would be even better to have a system that needs no glue logic at all, not just because internalization is more efficient (although such efficiency is useful), but because it has been designed to not need glue logic in the first place. Accomplishing that involves taking the use cases for glue logic—generally “first order” manipulations of a system, such as adding accounts or performing system turnup—and finding a way to handle those use cases directly within the application.

As a more detailed example, most turnup automation at Google is problematic because it ends up being maintained separately from the core system and therefore suffers from “bit rot,” i.e., not changing when the underlying systems change. Despite the best of intentions, attempting to more tightly couple the two (turnup automation and the core system) often fails due to unaligned priorities, as product developers will, not unreasonably, resist a test deployment requirement for every change.
Secondly, automation that is crucial but only executed at infrequent intervals and therefore difficult to test is often particularly fragile because of the extended feedback cycle. Cluster failover is one classic example of infrequently executed automation: failovers might only occur every few months, or infrequently enough that inconsistencies between instances are introduced.

The evolution of automation follows a path:

1. No automation: Database master is failed over manually between locations.
2. Externally maintained system-specific automation: An SRE has a failover script in his or her home directory.
3. Externally maintained generic automation: The SRE adds database support to a "generic failover" script that everyone uses.
4. Internally maintained system-specific automation: The database ships with its own failover script.
5. Systems that don’t need any automation: The database notices problems, and automatically fails over without human intervention.

SRE hates manual operations, so we obviously try to create systems that don’t require them. However, sometimes manual operations are unavoidable.

There is additionally a subvariety of automation that applies changes not across the domain of specific system-related configuration, but across the domain of production as a whole. In a highly centralized proprietary production environment like Google’s, there are a large number of changes that have a non–service-specific scope—e.g., changing upstream Chubby servers, a flag change to the Bigtable client library to make access more reliable, and so on—which nonetheless need to be safely managed and rolled back if necessary. Beyond a certain volume of changes, it is infeasible for production-wide changes to be accomplished manually, and at some time before that point, it’s a waste to have manual oversight for a process where a large proportion of the changes are either trivial or accomplished successfully by basic relaunch-and-check strategies.
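The final stage of that path, a system that needs no external automation, can be illustrated with a small hypothetical sketch: the detection and promotion logic lives inside the serving system itself, not in anyone's home-directory script.

```python
# Hypothetical sketch of the last evolution stage: the system itself detects
# a failed master and promotes a replica, with no external script involved.
def pick_new_master(replicas):
    """Promote the healthy replica with the most up-to-date log position."""
    healthy = [r for r in replicas if r["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy replica to promote; page a human")
    return max(healthy, key=lambda r: r["log_position"])

def failover_if_needed(master, replicas):
    if master["healthy"]:
        return master                      # nothing to do
    new_master = pick_new_master(replicas)
    new_master["role"] = "master"          # promotion is internal to the system
    return new_master

master = {"name": "db-0", "healthy": False}
replicas = [
    {"name": "db-1", "healthy": True, "log_position": 120},
    {"name": "db-2", "healthy": True, "log_position": 118},
]
promoted = failover_if_needed(master, replicas)
assert promoted["name"] == "db-1"
```

Real failover involves fencing the old master, replaying logs, and updating clients; the sketch only shows where the logic lives, not how hard it is.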
Let’s use internal case studies to illustrate some of the preceding points in detail. The first case study is about how, due to some diligent, far-sighted work, we managed to achieve the self-professed nirvana of SRE: to automate ourselves out of a job.

Automate Yourself Out of a Job: Automate ALL the Things!

For a long while, the Ads products at Google stored their data in a MySQL database. Because Ads data obviously has high reliability requirements, an SRE team was charged with looking after that infrastructure. From 2005 to 2008, the Ads Database mostly ran in what we considered to be a mature and managed state. For example, we had automated away the worst, but not all, of the routine work for standard replica replacements. We believed the Ads Database was well managed and that we had harvested most of the low-hanging fruit in terms of optimization and scale. However, as daily operations became comfortable, team members began to look at the next level of system development: migrating MySQL onto Google’s cluster scheduling system, Borg.

We hoped this migration would provide two main benefits:

- Completely eliminate machine/replica maintenance: Borg would automatically handle the setup/restart of new and broken tasks.
- Enable bin-packing of multiple MySQL instances on the same physical machine: Borg would enable more efficient use of machine resources via Containers.

In late 2008, we successfully deployed a proof of concept MySQL instance on Borg. Unfortunately, this was accompanied by a significant new difficulty. A core operating characteristic of Borg is that its tasks move around automatically. Tasks commonly move within Borg as frequently as once or twice per week. This frequency was tolerable for our database replicas, but unacceptable for our masters. At that time, the process for master failover took 30–90 minutes per instance.
Simply because we ran on shared machines and were subject to reboots for kernel upgrades, in addition to the normal rate of machine failure, we had to expect a number of otherwise unrelated failovers every week. This factor, in combination with the number of shards on which our system was hosted, meant that:

- Manual failovers would consume a substantial amount of human hours and would give us best-case availability of 99% uptime, which fell short of the actual business requirements of the product.
- In order to meet our error budgets, each failover would have to take less than 30 seconds of downtime.
- There was no way to optimize a human-dependent procedure to make downtime shorter than 30 seconds.

Therefore, our only choice was to automate failover. Actually, we needed to automate more than just failover.

In 2009 Ads SRE completed our automated failover daemon, which we dubbed “Decider.” Decider could complete MySQL failovers for both planned and unplanned failovers in less than 30 seconds 95% of the time. With the creation of Decider, MySQL on Borg (MoB) finally became a reality. We graduated from optimizing our infrastructure for a lack of failover to embracing the idea that failure is inevitable, and therefore optimizing to recover quickly through automation.

While automation let us achieve highly available MySQL in a world that forced up to two restarts per week, it did come with its own set of costs. All of our applications had to be changed to include significantly more failure-handling logic than before. Given that the norm in the MySQL development world is to assume that the MySQL instance will be the most stable component in the stack, this switch meant customizing software like JDBC to be more tolerant of our failure-prone environment. However, the benefits of migrating to MoB with Decider were well worth these costs. Once on MoB, the time our team spent on mundane operational tasks dropped by 95%.
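The error-budget arithmetic behind the earlier bullet points can be checked with a quick sketch. The 30-second automated target and the 30-90 minute manual range come from the text; the shard count and weekly failover rate below are illustrative assumptions, not the real numbers.

```python
# Back-of-the-envelope check: why manual failover cannot meet the budget.
# 30 s automated target and 30-90 min manual range are from the text;
# 20 failovers/week across all shards is an illustrative assumption.
WEEK_S = 7 * 24 * 3600

def availability(failovers_per_week: float, downtime_per_failover_s: float) -> float:
    return 1 - (failovers_per_week * downtime_per_failover_s) / WEEK_S

manual = availability(failovers_per_week=20, downtime_per_failover_s=45 * 60)
auto   = availability(failovers_per_week=20, downtime_per_failover_s=30)

assert manual < 0.99   # manual failover blows even a 99% budget
assert auto > 0.999    # 30 s automated failovers stay well inside it
```

Even with generous assumptions, hours of weekly human-driven downtime dominate the budget, which is why no amount of optimizing the manual procedure could close the gap.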
Our failovers were automated, so an outage of a single database task no longer paged a human. The main upshot of this new automation was that we had a lot more free time to spend on improving other parts of the infrastructure. Such improvements had a cascading effect: the more time we saved, the more time we were able to spend on optimizing and automating other tedious work. Eventually, we were able to automate schema changes, causing the cost of total operational maintenance of the Ads Database to drop by nearly 95%. Some might say that we had successfully automated ourselves out of this job. The hardware side of our domain also saw improvement. Migrating to MoB freed up considerable resources because we could schedule multiple MySQL instances on the same machines, which improved utilization of our hardware. In total, we were able to free up about 60% of our hardware. Our team was now flush with hardware and engineering resources.

This example demonstrates the wisdom of going the extra mile to deliver a platform rather than replacing existing manual procedures. The next example comes from the cluster infrastructure group, and illustrates some of the more difficult trade-offs you might encounter on your way to automating all the things.

Soothing the Pain: Applying Automation to Cluster Turnups

Ten years ago, the Cluster Infrastructure SRE team seemed to get a new hire every few months. As it turned out, that was approximately the same frequency at which we turned up a new cluster. Because turning up a service in a new cluster gives new hires exposure to a service’s internals, this task seemed like a natural and useful training tool. The steps taken to get a cluster ready for use were something like the following:

1. Fit out a datacenter building for power and cooling.
2. Install and configure core switches and connections to the backbone.
3. Install a few initial racks of servers.
4. Configure basic services such as DNS and installers, then configure a lock service, storage, and computing.
5. Deploy the remaining racks of machines.
6. Assign user-facing services resources, so their teams can set up the services.

Steps 4 and 6 were extremely complex. While basic services like DNS are relatively simple, the storage and compute subsystems at that time were still in heavy development, so new flags, components, and optimizations were added weekly. Some services had more than a hundred different component subsystems, each with a complex web of dependencies. Failing to configure one subsystem, or configuring a system or component differently than other deployments, is a customer-impacting outage waiting to happen.

In one case, a multi-petabyte Bigtable cluster was configured to not use the first (logging) disk on 12-disk systems, for latency reasons. A year later, some automation assumed that if a machine’s first disk wasn’t being used, that machine didn’t have any storage configured; therefore, it was safe to wipe the machine and set it up from scratch. All of the Bigtable data was wiped, instantly. Thankfully we had multiple real-time replicas of the dataset, but such surprises are unwelcome. Automation needs to be careful about relying on implicit "safety" signals.

Early automation focused on accelerating cluster delivery. This approach tended to rely upon creative use of SSH for tedious package distribution and service initialization problems. This strategy was an initial win, but those free-form scripts became a cholesterol of technical debt.

Detecting Inconsistencies with Prodtest

As the number of clusters grew, some clusters required hand-tuned flags and settings. As a result, teams wasted more and more time chasing down difficult-to-spot misconfigurations. If a flag that made GFS more responsive to log processing leaked into the default templates, cells with many files could run out of memory under load.
Infuriating and time-consuming misconfigurations crept in with nearly every large configuration change. The creative—though brittle—shell scripts we used to configure clusters were neither scaling to the number of people who wanted to make changes nor to the sheer number of cluster permutations that needed to be built. These shell scripts also failed to resolve more significant concerns before declaring that a service was good to take customer-facing traffic, such as:

- Were all of the service’s dependencies available and correctly configured?
- Were all configurations and packages consistent with other deployments?
- Could the team confirm that every configuration exception was desired?

Prodtest (Production Test) was an ingenious solution to these unwelcome surprises. We extended the Python unit test framework to allow for unit testing of real-world services. These unit tests have dependencies, allowing a chain of tests, and a failure in one test would quickly abort. Take the test shown in Figure 7-1 as an example.

Figure 7-1. ProdTest for DNS Service, showing how one failed test aborts the subsequent chain of tests

A given team’s Prodtest was given the cluster name, and it could validate that team’s services in that cluster. Later additions allowed us to generate a graph of the unit tests and their states. This functionality allowed an engineer to see quickly if their service was correctly configured in all clusters, and if not, why. The graph highlighted the failed step, and the failing Python unit test output a more verbose error message. Any time a team encountered a delay due to another team’s unexpected misconfiguration, a bug could be filed to extend their Prodtest. This ensured that a similar problem would be discovered earlier in the future. SREs were proud to be able to assure their customers that all services—both newly turned up services and existing services with new configuration—would reliably serve production traffic.
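The dependency-chaining behavior described above can be sketched in a few lines. This is a minimal illustration in the spirit of Prodtest, not its real API (the real system extended Python's unittest framework; the check names and framework here are hypothetical):

```python
# Minimal sketch of dependency-chained checks, in the spirit of Prodtest:
# a failure in one check aborts everything downstream of it.
def run_checks(checks, deps, cluster):
    results = {}
    for name, check in checks.items():
        if any(results.get(d) != "PASS" for d in deps.get(name, [])):
            results[name] = "ABORTED"      # a dependency failed: don't even run
            continue
        results[name] = "PASS" if check(cluster) else "FAIL"
    return results

checks = {
    "machine_db_has_cluster": lambda c: c["in_machine_db"],
    "dns_config_exists":      lambda c: c["dns_config"],
    "dns_config_pushed":      lambda c: c["dns_pushed"],
}
deps = {
    "dns_config_exists": ["machine_db_has_cluster"],
    "dns_config_pushed": ["dns_config_exists"],
}

cluster = {"in_machine_db": True, "dns_config": False, "dns_pushed": False}
results = run_checks(checks, deps, cluster)
assert results["dns_config_exists"] == "FAIL"
assert results["dns_config_pushed"] == "ABORTED"   # the chain aborts
```

Because every check records PASS, FAIL, or ABORTED, the results map is exactly the graph of test states the text describes: the first FAIL pinpoints the broken step, and everything downstream is visibly skipped rather than noisily failing.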
For the first time, our project managers could predict when a cluster could "go live," and had a complete understanding of why each cluster took six or more weeks to go from "network-ready" to "serving live traffic." Out of the blue, SRE received a mission from senior management: In three months, five new clusters will reach network-ready on the same day. Please turn them up in one week.

Resolving Inconsistencies Idempotently

A "One Week Turnup" was a terrifying mission. We had tens of thousands of lines of shell script owned by dozens of teams. We could quickly tell how unprepared any given cluster was, but fixing it meant that the dozens of teams would have to file hundreds of bugs, and then we had to hope that these bugs would be promptly fixed. We realized that evolving from "Python unit tests finding misconfigurations" to "Python code fixing misconfigurations" could enable us to fix these issues faster.

The unit test already knew which cluster we were examining and the specific test that was failing, so we paired each test with a fix. If each fix was written to be idempotent, and could assume that all dependencies were met, the problem should have been easy—and safe—to resolve. Requiring idempotent fixes meant teams could run their "fix script" every 15 minutes without fearing damage to the cluster’s configuration. If the DNS team’s test was blocked on the Machine Database team’s configuration of a new cluster, as soon as the cluster appeared in the database, the DNS team’s tests and fixes would start working.

Take the test shown in Figure 7-2 as an example. If TestDnsMonitoringConfigExists fails, as shown, we can call FixDnsMonitoringCreateConfig, which scrapes configuration from a database, then checks a skeleton configuration file into our revision control system. Then TestDnsMonitoringConfigExists passes on retry, and the TestDnsMonitoringConfigPushed test can be attempted. If the test fails, the FixDnsMonitoringPushConfig step runs.
If a fix fails multiple times, the automation assumes that the fix failed and stops, notifying the user. Armed with these scripts, a small group of engineers could ensure that we could go from "The network works, and machines are listed in the database" to "Serving 1% of websearch and ads traffic" in a matter of a week or two. At the time, this seemed to be the apex of automation technology.

Looking back, this approach was deeply flawed; the latency between the test, the fix, and then a second test introduced flaky tests that sometimes worked and sometimes failed. Not all fixes were naturally idempotent, so a flaky test that was followed by a fix might render the system in an inconsistent state.

Figure 7-2. ProdTest for DNS Service, showing that one failed test resulted in only running one fix

The Inclination to Specialize

Automation processes can vary in three respects:

- Competence, i.e., their accuracy
- Latency, how quickly all steps are executed when initiated
- Relevance, or proportion of real-world process covered by automation

We began with a process that was highly competent (maintained and run by the service owners), high-latency (the service owners performed the process in their spare time or assigned it to new engineers), and very relevant (the service owners knew when the real world changed, and could fix the automation).

To reduce turnup latency, many service owning teams instructed a single "turnup team" what automation to run. The turnup team used tickets to start each stage in the turnup so that we could track the remaining tasks, and who those tasks were assigned to. If the human interactions regarding automation modules occurred between people in the same room, cluster turnups could happen in a much shorter time. Finally, we had our competent, accurate, and timely automation process! But this state didn’t last long. The real world is chaotic: software, configuration, data, etc.
changed, resulting in over a thousand separate changes a day to affected systems. The people most affected by automation bugs were no longer domain experts, so the automation became less relevant (meaning that new steps were missed) and less competent (new flags might have caused automation to fail). However, it took a while for this drop in quality to impact velocity.

Automation code, like unit test code, dies when the maintaining team isn’t obsessive about keeping the code in sync with the codebase it covers. The world changes around the code: the DNS team adds new configuration options, the storage team changes their package names, and the networking team needs to support new devices. By relieving teams who ran services of the responsibility to maintain and run their automation code, we created ugly organizational incentives:

- A team whose primary task is to speed up the current turnup has no incentive to reduce the technical debt of the service-owning team running the service in production later.
- A team not running automation has no incentive to build systems that are easy to automate.
- A product manager whose schedule is not affected by low-quality automation will always prioritize new features over simplicity and automation.

The most functional tools are usually written by those who use them. A similar argument applies to why product development teams benefit from keeping at least some operational awareness of their systems in production.

Turnups were again high-latency, inaccurate, and incompetent—the worst of all worlds. However, an unrelated security mandate allowed us out of this trap. Much of distributed automation relied at that time on SSH. This is clumsy from a security perspective, because people must have root on many machines to run most commands. A growing awareness of advanced, persistent security threats drove us to reduce the privileges SREs enjoyed to the absolute minimum they needed to do their jobs.
We had to replace our use of sshd with an authenticated, ACL-driven, RPC-based Local Admin Daemon, also known as Admin Servers, which had permissions to perform those local changes. As a result, no one could install or modify a server without an audit trail. Changes to the Local Admin Daemon and the Package Repo were gated on code reviews, making it very difficult for someone to exceed their authority; giving someone the access to install packages would not let them view colocated logs. The Admin Server logged the RPC requestor, any parameters, and the results of all RPCs to enhance debugging and security audits.

Service-Oriented Cluster-Turnup

In the next iteration, Admin Servers became part of service teams' workflows, both as related to the machine-specific Admin Servers (for installing packages and rebooting) and cluster-level Admin Servers (for actions like draining or turning up a service). SREs moved from writing shell scripts in their home directories to building peer-reviewed RPC servers with fine-grained ACLs. Later on, after the realization that turnup processes had to be owned by the teams that owned the services fully sank in, we saw this as a way to approach cluster turnup as a Service-Oriented Architecture (SOA) problem: service owners would be responsible for creating an Admin Server to handle cluster turnup/turndown RPCs, sent by the system that knew when clusters were ready. In turn, each team would provide the contract (API) that the turnup automation needed, while still being free to change the underlying implementation. As a cluster reached "network-ready," automation sent an RPC to each Admin Server that played a part in turning up the cluster. We now have a low-latency, competent, and accurate process; most importantly, this process has stayed strong as the rate of change, the number of teams, and the number of services seem to double each year.
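The ACL-check-plus-audit-log pattern of an Admin Server can be reduced to a small sketch. Everything here is invented for illustration (the method names, the principals, the `handle_rpc` shape); the real Admin Servers are RPC services, which this sketch collapses into a single function call:

```python
import logging

AUDIT_LOG = logging.getLogger("admin_audit")

# Fine-grained ACLs: which principals may call which RPC methods.
# These entries are hypothetical examples.
ACLS = {
    "install_package": {"release-automation", "sre-oncall"},
    "reboot": {"sre-oncall"},
}


def handle_rpc(principal, method, params, impl):
    """Check the ACL, run the operation, and log requestor, params, result."""
    if principal not in ACLS.get(method, set()):
        AUDIT_LOG.warning("DENIED %s %s(%r)", principal, method, params)
        raise PermissionError(f"{principal} may not call {method}")
    result = impl(**params)
    # Log the requestor, the parameters, and the result of every RPC,
    # mirroring the audit behavior described in the text.
    AUDIT_LOG.info("OK %s %s(%r) -> %r", principal, method, params, result)
    return result
```

The point of the shape is that authorization and auditing live in one choke point: granting a principal `install_package` says nothing about `reboot`, and every call, allowed or denied, leaves a trail.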
As mentioned earlier, our evolution of turnup automation followed a path:

1. Operator-triggered manual action (no automation)
2. Operator-written, system-specific automation
3. Externally maintained generic automation
4. Internally maintained, system-specific automation
5. Autonomous systems that need no human intervention

While this evolution has, broadly speaking, been a success, the Borg case study illustrates another way we have come to think of the problem of automation.

Borg: Birth of the Warehouse-Scale Computer

Another way to understand the development of our attitude toward automation, and when and where that automation is best deployed, is to consider the history of the development of our cluster management systems. 31 Like MySQL on Borg, which demonstrated the success of converting manual operations to automatic ones, and the cluster turnup process, which demonstrated the downside of not thinking carefully enough about where and how automation was implemented, developing cluster management also ended up demonstrating another lesson about how automation should be done. Like our previous two examples, something quite sophisticated was created as the eventual result of continuous evolution from simpler beginnings.

Google's clusters were initially deployed much like everyone else's small networks of the time: racks of machines with specific purposes and heterogeneous configurations. Engineers would log in to some well-known "master" machine to perform administrative tasks; "golden" binaries and configuration lived on these masters. As we had only one colo provider, most naming logic implicitly assumed that location. As production grew, and we began to use multiple clusters, different domains (cluster names) entered the picture. It became necessary to have a file describing what each machine did, which grouped machines under some loose naming strategy.
This descriptor file, in combination with the equivalent of a parallel SSH, allowed us to reboot (for example) all the search machines in one go. Around this time, it was common to get tickets like "search is done with machine x1, crawl can have the machine now." Automation development began. Initially automation consisted of simple Python scripts for operations such as the following:

Service management: keeping services running (e.g., restarts after segfaults)
Tracking what services were supposed to run on which machines
Log message parsing: SSHing into each machine and looking for regexps

Automation eventually mutated into a proper database that tracked machine state, and also incorporated more sophisticated monitoring tools. With the union set of the automation available, we could now automatically manage much of the lifecycle of machines: noticing when machines were broken, removing the services, sending them to repair, and restoring the configuration when they came back from repair.

But to take a step back, this automation was useful yet profoundly limited, due to the fact that abstractions of the system were relentlessly tied to physical machines. We needed a new approach, hence Borg [Ver15] was born: a system that moved away from the relatively static host/port/job assignments of the previous world, toward treating a collection of machines as a managed sea of resources. Central to its success—and its conception—was the notion of turning cluster management into an entity for which API calls could be issued, to some central coordinator. This liberated extra dimensions of efficiency, flexibility, and reliability: unlike the previous model of machine "ownership," Borg could allow machines to schedule, for example, batch and user-facing tasks on the same machine.
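The machine-lifecycle automation described earlier in this section (notice a broken machine, remove its services, send it to repair, restore configuration on return) can be sketched as a tiny state machine. All field and function names are invented; real systems track far more state:

```python
def drain_services(machine):
    """Remove services from a machine before it goes to repairs."""
    machine["services"] = []


def restore_config(machine):
    """Restore the machine's assigned services when it returns."""
    machine["services"] = list(machine["assigned_services"])


def step(machine):
    """Advance one machine through the repair lifecycle by one step."""
    state, healthy = machine["state"], machine["healthy"]
    if state == "serving" and not healthy:
        drain_services(machine)            # broken: drain, then repair
        machine["state"] = "in_repair"
    elif state == "in_repair" and healthy:
        restore_config(machine)            # fixed: restore configuration
        machine["state"] = "serving"
    return machine["state"]
```

Running `step` periodically over every machine record is, in miniature, the "union set of the automation" the text describes, and it also shows the limitation the text notes: every abstraction here is tied to an individual physical machine.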
This functionality ultimately resulted in continuous and automatic operating system upgrades with a very small amount of constant 32 effort—effort that does not scale with the total size of production deployments. Slight deviations in machine state are now automatically fixed; brokenness and lifecycle management are essentially no-ops for SRE at this point. Thousands of machines are born, die, and go into repairs daily with no SRE effort. To echo the words of Ben Treynor Sloss: by taking the approach that this was a software problem, the initial automation bought us enough time to turn cluster management into something autonomous, as opposed to automated. We achieved this goal by bringing ideas related to data distribution, APIs, hub-and-spoke architectures, and classic distributed system software development to bear upon the domain of infrastructure management. An interesting analogy is possible here: we can make a direct mapping between the single machine case and the development of cluster management abstractions. In this view, rescheduling on another machine looks a lot like a process moving from one CPU to another: of course, those compute resources happen to be at the other end of a network link, but to what extent does that actually matter? Thinking in these terms, rescheduling looks like an intrinsic feature of the system rather than something one would “automate”—humans couldn’t react fast enough anyway. Similarly in the case of cluster turnup: in this metaphor, cluster turnup is simply additional schedulable capacity, a bit like adding disk or RAM to a single computer. However, a single-node computer is not, in general, expected to continue operating when a large number of components fail. The global computer is—it must be self-repairing to operate once it grows past a certain size, due to the essentially statistically guaranteed large number of failures taking place every second. 
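To make the "statistically guaranteed large number of failures" concrete, a back-of-the-envelope calculation helps. The fleet size and per-machine MTBF below are assumed round numbers, not Google's figures:

```python
machines = 10_000    # assumed fleet size
mtbf_days = 730      # assumed mean time between failures per machine (~2 years)

# Treating failures as independent, the fleet as a whole sees
# machines / mtbf_days failures per day:
failures_per_day = machines / mtbf_days   # roughly 13.7 machine failures a day
```

Even a modest fleet with very reliable machines breaks more than a dozen times a day; scale the same arithmetic to a million machines and failures arrive roughly once a minute, which is why self-repair stops being an optimization and becomes a requirement.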
This implies that as we move systems up the hierarchy from manually triggered, to automatically triggered, to autonomous, some capacity for self-introspection is necessary to survive.

Reliability Is the Fundamental Feature

Of course, for effective troubleshooting, the details of internal operation that the introspection relies upon should also be exposed to the humans managing the overall system. Analogous discussions about the impact of automation in the noncomputer domain—for example, in airplane flight 33 or industrial applications—often point out the downside of highly effective automation: 34 human operators are progressively more relieved of useful direct contact with the system as the automation covers more and more daily activities over time. Inevitably, then, a situation arises in which the automation fails, and the humans are now unable to successfully operate the system. The fluidity of their reactions has been lost due to lack of practice, and their mental models of what the system should be doing no longer reflect the reality of what it is doing. 35 This situation arises more when the system is nonautonomous—i.e., where automation replaces manual actions, and the manual actions are presumed to be always performable and available just as they were before. Sadly, over time, this ultimately becomes false: those manual actions are not always performable because the functionality to permit them no longer exists.

We, too, have experienced situations where automation has been actively harmful on a number of occasions—see Automation: Enabling Failure at Scale—but in Google's experience, there are more systems for which automation or autonomous behavior are no longer optional extras. As you scale, this is of course the case, but there are still strong arguments for more autonomous behavior of systems irrespective of size. Reliability is the fundamental feature, and autonomous, resilient behavior is one useful way to get that.
Recommendations

You might read the examples in this chapter and decide that you need to be Google-scale before you have anything to do with automation whatsoever. This is untrue, for two reasons: automation provides more than just time saving, so it's worth implementing in more cases than a simple time-expended versus time-saved calculation might suggest. But the approach with the highest leverage actually occurs in the design phase: shipping and iterating rapidly might allow you to implement functionality faster, yet rarely makes for a resilient system. Autonomous operation is difficult to convincingly retrofit to sufficiently large systems, but standard good practices in software engineering will help considerably: having decoupled subsystems, introducing APIs, minimizing side effects, and so on.

Automation: Enabling Failure at Scale

Google runs over a dozen of its own large datacenters, but we also depend on machines in many third-party colocation facilities (or "colos"). Our machines in these colos are used to terminate most incoming connections, or as a cache for our own Content Delivery Network, in order to lower end-user latency. At any point in time, a number of these racks are being installed or decommissioned; both of these processes are largely automated. One step during decommission involves overwriting the full content of the disk of all the machines in the rack, after which point an independent system verifies the successful erase. We call this process "Diskerase."

Once upon a time, the automation in charge of decommissioning a particular rack failed, but only after the Diskerase step had completed successfully. Later, the decommission process was restarted from the beginning, to debug the failure. On that iteration, when trying to send the set of machines in the rack to Diskerase, the automation determined that the set of machines that still needed to be Diskerased was (correctly) empty.
Unfortunately, the empty set was used as a special value, interpreted to mean "everything." This means the automation sent almost all the machines we have in all colos to Diskerase. Within minutes, the highly efficient Diskerase wiped the disks on all machines in our CDN, and the machines were no longer able to terminate connections from users (or do anything else useful). We were still able to serve all the users from our own datacenters, and after a few minutes the only effect visible externally was a slight increase in latency. As far as we could tell, very few users noticed the problem at all, thanks to good capacity planning (at least we got that right!). Meanwhile, we spent the better part of two days reinstalling the machines in the affected colo racks; then we spent the following weeks auditing and adding more sanity checks—including rate limiting—into our automation, and making our decommission workflow idempotent.

26 For readers who already feel they precisely understand the value of automation, skip ahead to The Value for Google SRE. However, note that our description contains some nuances that might be useful to keep in mind while reading the rest of the chapter.
27 The expertise acquired in building such automation is also valuable in itself; engineers both deeply understand the existing processes they have automated and can later automate novel processes more quickly.
28 See the following XKCD cartoon: https://xkcd.com/1205/.
29 See, for example, https://blog.engineyard.com/2014/pets-vs-cattle.
30 Of course, not every system that needs to be managed actually provides callable APIs for management—forcing some tooling to use, e.g., CLI invocations or automated website clicks.
31 We have compressed and simplified this history to aid understanding.
32 As in a small, unchanging number.
33 See, e.g., https://en.wikipedia.org/wiki/Air_France_Flight_447.
34 See, e.g., [Bai83] and [Sar97].
35 This is yet another good reason for regular practice drills; see Disaster Role Playing.

Copyright © 2017 Google, Inc. Published by O'Reilly Media, Inc. Licensed under CC BY-NC-ND 4.0
http://www.trello.com/guide/remote-work
How to Embrace Remote Work | Trello

The complete guide to setting up your team for remote work success:

Dispelling remote work myths: tips & best practices
How to build strong communication and collaboration with a remote team
All the tools to make remote work cool
How to create a remote team culture
Find (and land) the perfect remote job

Follow the tips, strategies, and advice from the world’s leading companies in order to empower a productive and collaborative remote team.

Dispelling remote work myths—tips and best practices

When the topic of remote work comes up, it’s not uncommon for people to become immediately skeptical. Common narratives include:

“That could never work with our system.”
“In theory it sounds good, but remote people can’t come to meetings and they never have all the information.”
“Yeah, we tried that, but it didn’t really work and the remote people ended up getting fired.”
“How do you know people aren’t slacking off?”

Yikes. All of these statements are working off of either wrong assumptions or process failures. Remote work is getting a bad name when in fact there are easily identifiable behaviors and policies that are causing the problems. Effective remote work starts at the top. When company culture leaders correct non-remote-friendly behaviors and put inclusive processes in place, the effects trickle down into a successful experience for everyone.

Read more

How to build strong communication and collaboration with a remote team

Remote team communication requires two basic things: thoughtful consideration and some adaptations for the virtual office.
As more teams go digital and turn to remote work, it’s important to remember that the kinds of nuanced communication you get in an office setting don’t necessarily translate online. Setting some ground rules for team communication goes a long way in making sure your team is productive and happy.

Read more

Digital tools needed to work remotely

Tools matter more in remote work because they are the foundation for communication. You cannot walk up to someone’s desk to talk to them; you need to adapt tools to become your "virtual office." After all, if technology hadn’t advanced to what it is today, remote work wouldn’t even be possible. Here’s a roundup of the most important types of tools you need to consider for remote work, as well as some specific recommendations.

Read more

How to create a company culture as a remote team

One of the biggest concerns when considering remote-friendly work is the perceived culture hit. Workplaces have relied on co-location to build corporate culture for so long that it seems bleak to think of a December without the requisite tinsel-and-punch office holiday party. The key to building great remote relationships is intention. You need to try harder to find common interests, have meaningful meetings, and truly understand each person's perspective. The result can be a lasting network of true friends that you can depend on, no matter where your travels might take you.

Creating a strong remote team culture depends on two things:

A clear set of "rules to live by" that have 100% buy-in across the company.
A healthy system of meetings, events, and habits that keep people communicating.

Oh, and don't forget to use a lot of 😄 and 👍

Read more

Find and land the right remote job: tips & tricks for interviews & hiring

In 2018, 56% of companies around the world allowed employees to work remotely.
Remote opportunities aren’t just becoming easier to source, they are being developed by companies who are purposefully building a remote-friendly work culture (and looking for the right candidates to thrive in it).

Read more

Copyright © 2024 Atlassian
https://sre.google/sre-book/automation-at-google#id-QQLukSXFDcMTj
Google SRE - Google Automation For Reliability

Chapter 7 - The Evolution of Automation at Google
Written by Niall Murphy with John Looney and Michael Kacirek
Edited by Betsy Beyer

Besides black art, there is only automation and mechanization.
Federico García Lorca (1898–1936), Spanish poet and playwright

For SRE, automation is a force multiplier, not a panacea.
Of course, just multiplying force does not naturally change the accuracy of where that force is applied: doing automation thoughtlessly can create as many problems as it solves. Therefore, while we believe that software-based automation is superior to manual operation in most circumstances, better than either option is a higher-level system design requiring neither of them—an autonomous system. Or to put it another way, the value of automation comes from both what it does and its judicious application. We’ll discuss both the value of automation and how our attitude has evolved over time.

The Value of Automation

What exactly is the value of automation? 26

Consistency

Although scale is an obvious motivation for automation, there are many other reasons to use it. Take the example of university computing systems, where many systems engineering folks started their careers. Systems administrators of that background were generally charged with running a collection of machines or some software, and were accustomed to manually performing various actions in the discharge of that duty. One common example is creating user accounts; others include purely operational duties like making sure backups happen, managing server failover, and small data manipulations like changing the upstream DNS servers’ resolv.conf, DNS server zone data, and similar activities. Ultimately, however, this prevalence of manual tasks is unsatisfactory for both the organizations and indeed the people maintaining systems in this way. For a start, any action performed by a human or humans hundreds of times won’t be performed the same way each time: even with the best will in the world, very few of us will ever be as consistent as a machine. This inevitable lack of consistency leads to mistakes, oversights, issues with data quality, and, yes, reliability problems. In this domain (the execution of well-scoped, known procedures), the value of consistency is in many ways the primary value of automation.
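The account-creation example above shows what consistency looks like in code: an idempotent routine produces the same result whether it runs once or a hundred times. The directory structure and field names here are illustrative only, not any real directory schema:

```python
def ensure_account(directory, username, shell="/bin/bash"):
    """Create the account if missing; a no-op if it already exists.

    Because the function converges on a desired state rather than
    blindly performing an action, repeated runs cannot diverge the
    way repeated manual runs by different humans can.
    """
    if username in directory:
        return directory[username]   # already consistent: do nothing
    account = {"name": username, "shell": shell}
    directory[username] = account
    return account
```

A human following a checklist might skip a step or mistype a shell on the fortieth account of the day; the routine above cannot.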
A Platform

Automation doesn’t just provide consistency. Designed and done properly, automatic systems also provide a platform that can be extended, applied to more systems, or perhaps even spun out for profit. 27 (The alternative, no automation, is neither cost effective nor extensible: it is instead a tax levied on the operation of a system.) A platform also centralizes mistakes. In other words, a bug fixed in the code will be fixed there once and forever, unlike a sufficiently large set of humans performing the same procedure, as discussed previously. A platform can be extended to perform additional tasks more easily than humans can be instructed to perform them (or sometimes even realize that they have to be done). Depending on the nature of the task, it can run either continuously or much more frequently than humans could appropriately accomplish the task, or at times that are inconvenient for humans. Furthermore, a platform can export metrics about its performance, or otherwise allow you to discover details about your process you didn’t know previously, because these details are more easily measurable within the context of a platform.

Faster Repairs

There’s an additional benefit for systems where automation is used to resolve common faults in a system (a frequent situation for SRE-created automation). If automation runs regularly and successfully enough, the result is a reduced mean time to repair (MTTR) for those common faults. You can then spend your time on other tasks instead, thereby achieving increased developer velocity because you don’t have to spend time either preventing a problem or (more commonly) cleaning up after it. As is well understood in the industry, the later in the product lifecycle a problem is discovered, the more expensive it is to fix; see Testing for Reliability.
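The "platform exports metrics" point can be made concrete with a small wrapper that records run counts, failures, and elapsed time for every automation step, making the process measurable as a side effect of running it. The names (`METRICS`, `measured`) are invented for this sketch:

```python
import time
from collections import defaultdict

# Per-step counters; a real platform would export these to a
# monitoring system rather than keep them in a dict.
METRICS = defaultdict(lambda: {"runs": 0, "failures": 0, "total_secs": 0.0})


def measured(step_name, func, *args, **kwargs):
    """Run an automation step, recording metrics about it."""
    m = METRICS[step_name]
    m["runs"] += 1
    start = time.monotonic()
    try:
        return func(*args, **kwargs)
    except Exception:
        m["failures"] += 1
        raise
    finally:
        m["total_secs"] += time.monotonic() - start
```

Once every step flows through one choke point like this, questions such as "which step fails most?" or "where does the turnup spend its time?" become queries over data rather than guesses.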
Generally, problems that occur in actual production are most expensive to fix, both in terms of time and money, which means that an automated system looking for problems as soon as they arise has a good chance of lowering the total cost of the system, given that the system is sufficiently large.

Faster Action

In the infrastructural situations where SRE automation tends to be deployed, humans don’t usually react as fast as machines. In most common cases, where, for example, failover or traffic switching can be well defined for a particular application, it makes no sense to effectively require a human to intermittently press a button called “Allow system to continue to run.” (Yes, it is true that sometimes automatic procedures can end up making a bad situation worse, but that is why such procedures should be scoped over well-defined domains.) Google has a large amount of automation; in many cases, the services we support could not long survive without this automation because they crossed the threshold of manageable manual operation long ago.

Time Saving

Finally, time saving is an oft-quoted rationale for automation. Although people cite this rationale for automation more than the others, in many ways the benefit is often less immediately calculable. Engineers often waver over whether a particular piece of automation or code is worth writing, in terms of effort saved in not requiring a task to be performed manually versus the effort required to write it. 28 It’s easy to overlook the fact that once you have encapsulated some task in automation, anyone can execute the task. Therefore, the time savings apply across anyone who would plausibly use the automation. Decoupling operator from operation is very powerful. Joseph Bironas, an SRE who led Google’s datacenter turnup efforts for a time, forcefully argued: "If we are engineering processes and solutions that are not automatable, we continue having to staff humans to maintain the system.
If we have to staff humans to do the work, we are feeding the machines with the blood, sweat, and tears of human beings. Think The Matrix with less special effects and more pissed off System Administrators."

The Value for Google SRE

All of these benefits and trade-offs apply to us just as much as anyone else, and Google does have a strong bias toward automation. Part of our preference for automation springs from our particular business challenges: the products and services we look after are planet-spanning in scale, and we don’t typically have time to engage in the same kind of machine or service hand-holding common in other organizations. 29 For truly large services, the factors of consistency, quickness, and reliability dominate most conversations about the trade-offs of performing automation. Another argument in favor of automation, particularly in the case of Google, is our complicated yet surprisingly uniform production environment, described in The Production Environment at Google, from the Viewpoint of an SRE. While other organizations might have an important piece of equipment without a readily accessible API, software for which no source code is available, or another impediment to complete control over production operations, Google generally avoids such scenarios. We have built APIs for systems when no API was available from the vendor. Even though purchasing software for a particular task would have been much cheaper in the short term, we chose to write our own solutions, because doing so produced APIs with the potential for much greater long-term benefits. We spent a lot of time overcoming obstacles to automatic system management, and then resolutely developed that automatic system management itself. Given how Google manages its source code [Pot16], the availability of that code for more or less any system that SRE touches also means that our mission to “own the product in production” is much easier because we control the entirety of the stack.
Of course, although Google is ideologically bent upon using machines to manage machines where possible, reality requires some modification of our approach. It isn’t appropriate to automate every component of every system, and not everyone has the ability or inclination to develop automation at a particular time. Some essential systems started out as quick prototypes, not designed to last or to interface with automation. The previous paragraphs state a maximalist view of our position, but one that we have been broadly successful at putting into action within the Google context. In general, we have chosen to create platforms where we could, or to position ourselves so that we could create platforms over time. We view this platform-based approach as necessary for manageability and scalability.

The Use Cases for Automation

In the industry, automation is the term generally used for writing code to solve a wide variety of problems, although the motivations for writing this code, and the solutions themselves, are often quite different. More broadly, in this view, automation is “meta-software”—software to act on software. As we implied earlier, there are a number of use cases for automation. Here is a non-exhaustive list of examples:

- User account creation
- Cluster turnup and turndown for services
- Software or hardware installation preparation and decommissioning
- Rollouts of new software versions
- Runtime configuration changes
- A special case of runtime config changes: changes to your dependencies

This list could continue essentially ad infinitum.

Google SRE’s Use Cases for Automation

In Google, we have all of the use cases just listed, and more. However, within Google SRE, our primary affinity has typically been for running infrastructure, as opposed to managing the quality of the data that passes over that infrastructure.
This line isn’t totally clear—for example, we care deeply if half of a dataset vanishes after a push, and therefore we alert on coarse-grain differences like this, but it’s rare for us to write the equivalent of changing the properties of some arbitrary subset of accounts on a system. Therefore, the context for our automation is often automation to manage the lifecycle of systems, not their data: for example, deployments of a service in a new cluster. To this extent, SRE’s automation efforts are not far off what many other people and organizations do, except that we use different tools to manage it and have a different focus (as we’ll discuss). Widely available tools like Puppet, Chef, cfengine, and even Perl, which all provide ways to automate particular tasks, differ mostly in terms of the level of abstraction of the components provided to help the act of automating. A full language like Perl provides POSIX-level affordances, which in theory provide an essentially unlimited scope of automation across the APIs accessible to the system, 30 whereas Chef and Puppet provide out-of-the-box abstractions with which services or other higher-level entities can be manipulated. The trade-off here is classic: higher-level abstractions are easier to manage and reason about, but when you encounter a “leaky abstraction,” you fail systemically, repeatedly, and potentially inconsistently. For example, we often assume that pushing a new binary to a cluster is atomic; the cluster will either end up with the old version, or the new version. However, real-world behavior is more complicated: that cluster’s network can fail halfway through; machines can fail; communication to the cluster management layer can fail, leaving the system in an inconsistent state; depending on the situation, new binaries could be staged but not pushed, or pushed but not restarted, or restarted but not verifiable. 
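The leaky-abstraction point above can be made concrete. The following sketch (hypothetical state names, not any real rollout tool) models the intermediate states a binary push can be stranded in, and refuses to report a simple "old or new" answer unless the cluster has actually converged:

```python
from enum import Enum

class PushState(Enum):
    OLD = "old"
    STAGED = "staged"        # new binary copied to the machine, not activated
    PUSHED = "pushed"        # activated on disk, process not yet restarted
    RESTARTED = "restarted"  # process restarted, health not yet verified
    VERIFIED = "verified"    # new binary running and verified

def cluster_version(states):
    """Report 'old' or 'new' only if every machine converged; otherwise
    surface the inconsistent intermediate states instead of pretending
    the push was atomic."""
    if all(s is PushState.OLD for s in states):
        return "old"
    if all(s is PushState.VERIFIED for s in states):
        return "new"
    return "inconsistent: " + ", ".join(sorted({s.value for s in states}))
```

A higher-level abstraction that only models "old" and "new" fails exactly when a push is interrupted mid-flight and machines land in the middle three states.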
Very few abstractions model these kinds of outcomes successfully, and most generally end up halting themselves and calling for intervention. Truly bad automation systems don’t even do that. SRE has a number of philosophies and products in the domain of automation, some of which look more like generic rollout tools without particularly detailed modeling of higher-level entities, and some of which look more like languages for describing service deployment (and so on) at a very abstract level. Work done in the latter tends to be more reusable and be more of a common platform than the former, but the complexity of our production environment sometimes means that the former approach is the most immediately tractable option.

A Hierarchy of Automation Classes

Although all of these automation steps are valuable, and indeed an automation platform is valuable in and of itself, in an ideal world, we wouldn’t need externalized automation. In fact, instead of having a system that has to have external glue logic, it would be even better to have a system that needs no glue logic at all, not just because internalization is more efficient (although such efficiency is useful), but because it has been designed to not need glue logic in the first place. Accomplishing that involves taking the use cases for glue logic—generally “first order” manipulations of a system, such as adding accounts or performing system turnup—and finding a way to handle those use cases directly within the application. As a more detailed example, most turnup automation at Google is problematic because it ends up being maintained separately from the core system and therefore suffers from “bit rot,” i.e., not changing when the underlying systems change. Despite the best of intentions, attempting to more tightly couple the two (turnup automation and the core system) often fails due to unaligned priorities, as product developers will, not unreasonably, resist a test deployment requirement for every change.
Secondly, automation that is crucial but only executed at infrequent intervals and therefore difficult to test is often particularly fragile because of the extended feedback cycle. Cluster failover is one classic example of infrequently executed automation: failovers might only occur every few months, or infrequently enough that inconsistencies between instances are introduced. The evolution of automation follows a path:

1) No automation: the database master is failed over manually between locations.
2) Externally maintained system-specific automation: an SRE has a failover script in his or her home directory.
3) Externally maintained generic automation: the SRE adds database support to a "generic failover" script that everyone uses.
4) Internally maintained system-specific automation: the database ships with its own failover script.
5) Systems that don’t need any automation: the database notices problems, and automatically fails over without human intervention.

SRE hates manual operations, so we obviously try to create systems that don’t require them. However, sometimes manual operations are unavoidable. There is additionally a subvariety of automation that applies changes not across the domain of specific system-related configuration, but across the domain of production as a whole. In a highly centralized proprietary production environment like Google’s, there are a large number of changes that have a non–service-specific scope—e.g., changing upstream Chubby servers, a flag change to the Bigtable client library to make access more reliable, and so on—which nonetheless need to be safely managed and rolled back if necessary. Beyond a certain volume of changes, it is infeasible for production-wide changes to be accomplished manually, and at some time before that point, it’s a waste to have manual oversight for a process where a large proportion of the changes are either trivial or accomplished successfully by basic relaunch-and-check strategies.
Let’s use internal case studies to illustrate some of the preceding points in detail. The first case study is about how, due to some diligent, far-sighted work, we managed to achieve the self-professed nirvana of SRE: to automate ourselves out of a job.

Automate Yourself Out of a Job: Automate ALL the Things!

For a long while, the Ads products at Google stored their data in a MySQL database. Because Ads data obviously has high reliability requirements, an SRE team was charged with looking after that infrastructure. From 2005 to 2008, the Ads Database mostly ran in what we considered to be a mature and managed state. For example, we had automated away the worst, but not all, of the routine work for standard replica replacements. We believed the Ads Database was well managed and that we had harvested most of the low-hanging fruit in terms of optimization and scale. However, as daily operations became comfortable, team members began to look at the next level of system development: migrating MySQL onto Google’s cluster scheduling system, Borg. We hoped this migration would provide two main benefits:

- Completely eliminate machine/replica maintenance: Borg would automatically handle the setup and restart of new and broken tasks.
- Enable bin-packing of multiple MySQL instances on the same physical machine: Borg would enable more efficient use of machine resources via containers.

In late 2008, we successfully deployed a proof-of-concept MySQL instance on Borg. Unfortunately, this was accompanied by a significant new difficulty. A core operating characteristic of Borg is that its tasks move around automatically. Tasks commonly move within Borg as frequently as once or twice per week. This frequency was tolerable for our database replicas, but unacceptable for our masters. At that time, the process for master failover took 30–90 minutes per instance.
Simply because we ran on shared machines and were subject to reboots for kernel upgrades, in addition to the normal rate of machine failure, we had to expect a number of otherwise unrelated failovers every week. This factor, in combination with the number of shards on which our system was hosted, meant that:

- Manual failovers would consume a substantial number of human hours and would give us best-case availability of 99% uptime, which fell short of the actual business requirements of the product.
- In order to meet our error budgets, each failover would have to take less than 30 seconds of downtime.
- There was no way to optimize a human-dependent procedure to make downtime shorter than 30 seconds.

Therefore, our only choice was to automate failover. Actually, we needed to automate more than just failover. In 2009 Ads SRE completed our automated failover daemon, which we dubbed “Decider.” Decider could complete MySQL failovers for both planned and unplanned failovers in less than 30 seconds 95% of the time. With the creation of Decider, MySQL on Borg (MoB) finally became a reality. We graduated from optimizing our infrastructure for a lack of failover to embracing the idea that failure is inevitable, and therefore optimizing to recover quickly through automation. While automation let us achieve highly available MySQL in a world that forced up to two restarts per week, it did come with its own set of costs. All of our applications had to be changed to include significantly more failure-handling logic than before. Given that the norm in the MySQL development world is to assume that the MySQL instance will be the most stable component in the stack, this switch meant customizing software like JDBC to be more tolerant of our failure-prone environment. However, the benefits of migrating to MoB with Decider were well worth these costs. Once on MoB, the time our team spent on mundane operational tasks dropped by 95%.
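The 30-second figure can be sanity-checked with a back-of-envelope calculation. The rates below are illustrative assumptions (roughly two forced failovers per week, with a manual failover averaging 60 minutes of the stated 30–90 minute range):

```python
def availability(failovers_per_week, downtime_per_failover_s):
    """Fraction of a week a shard is up, given forced failovers."""
    week_s = 7 * 24 * 3600  # 604,800 seconds
    return 1 - failovers_per_week * downtime_per_failover_s / week_s

# Manual failover: ~2 per week, averaging 60 minutes each.
manual = availability(2, 60 * 60)   # ~0.988, i.e., roughly "99%" uptime
# Automated failover (Decider-style): same rate, under 30 seconds each.
automated = availability(2, 30)     # ~0.9999
```

Two hour-long manual failovers per week cap availability near 99%; cutting each failover under 30 seconds lifts it to roughly 99.99%, which is the gap the text describes between a human-dependent procedure and the error budget.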
Our failovers were automated, so an outage of a single database task no longer paged a human. The main upshot of this new automation was that we had a lot more free time to spend on improving other parts of the infrastructure. Such improvements had a cascading effect: the more time we saved, the more time we were able to spend on optimizing and automating other tedious work. Eventually, we were able to automate schema changes, causing the cost of total operational maintenance of the Ads Database to drop by nearly 95%. Some might say that we had successfully automated ourselves out of this job. The hardware side of our domain also saw improvement. Migrating to MoB freed up considerable resources because we could schedule multiple MySQL instances on the same machines, which improved utilization of our hardware. In total, we were able to free up about 60% of our hardware. Our team was now flush with hardware and engineering resources. This example demonstrates the wisdom of going the extra mile to deliver a platform rather than replacing existing manual procedures. The next example comes from the cluster infrastructure group, and illustrates some of the more difficult trade-offs you might encounter on your way to automating all the things.

Soothing the Pain: Applying Automation to Cluster Turnups

Ten years ago, the Cluster Infrastructure SRE team seemed to get a new hire every few months. As it turned out, that was approximately the same frequency at which we turned up a new cluster. Because turning up a service in a new cluster gives new hires exposure to a service’s internals, this task seemed like a natural and useful training tool. The steps taken to get a cluster ready for use were something like the following:

1. Fit out a datacenter building for power and cooling.
2. Install and configure core switches and connections to the backbone.
3. Install a few initial racks of servers.
4. Configure basic services such as DNS and installers, then configure a lock service, storage, and computing.
5. Deploy the remaining racks of machines.
6. Assign user-facing services resources, so their teams can set up the services.

Steps 4 and 6 were extremely complex. While basic services like DNS are relatively simple, the storage and compute subsystems at that time were still in heavy development, so new flags, components, and optimizations were added weekly. Some services had more than a hundred different component subsystems, each with a complex web of dependencies. Failing to configure one subsystem, or configuring a system or component differently than other deployments, is a customer-impacting outage waiting to happen. In one case, a multi-petabyte Bigtable cluster was configured to not use the first (logging) disk on 12-disk systems, for latency reasons. A year later, some automation assumed that if a machine’s first disk wasn’t being used, that machine didn’t have any storage configured; therefore, it was safe to wipe the machine and set it up from scratch. All of the Bigtable data was wiped, instantly. Thankfully we had multiple real-time replicas of the dataset, but such surprises are unwelcome. Automation needs to be careful about relying on implicit "safety" signals. Early automation focused on accelerating cluster delivery. This approach tended to rely upon creative use of SSH for tedious package distribution and service initialization problems. This strategy was an initial win, but those free-form scripts became a cholesterol of technical debt.

Detecting Inconsistencies with Prodtest

As the number of clusters grew, some clusters required hand-tuned flags and settings. As a result, teams wasted more and more time chasing down difficult-to-spot misconfigurations. If a flag that made GFS more responsive to log processing leaked into the default templates, cells with many files could run out of memory under load.
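The Bigtable wipe happened because the automation inferred "no storage here" from an implicit signal (an unused first disk). A safer pattern is to gate destructive actions on an explicit, authoritative record; here is a minimal sketch with invented field names:

```python
def safe_to_wipe(machine, inventory):
    """Gate a destructive action on an explicit record, not an inferred
    signal like "first disk unused". Only an authoritative, explicit
    'storage_configured: False' permits the wipe; anything ambiguous
    (unknown machine, missing field) means refuse."""
    record = inventory.get(machine)
    if record is None:
        return False  # unknown machine: never wipe
    # Only an explicit False counts; a missing key is *not* permission.
    return record.get("storage_configured") is False
```

With the implicit-signal approach, the "storage unused" inference silently covered the Bigtable case; with an explicit record, an unaccounted-for machine simply fails safe.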
Infuriating and time-consuming misconfigurations crept in with nearly every large configuration change. The creative—though brittle—shell scripts we used to configure clusters were neither scaling to the number of people who wanted to make changes nor to the sheer number of cluster permutations that needed to be built. These shell scripts also failed to resolve more significant concerns before declaring that a service was good to take customer-facing traffic, such as:

- Were all of the service’s dependencies available and correctly configured?
- Were all configurations and packages consistent with other deployments?
- Could the team confirm that every configuration exception was desired?

Prodtest (Production Test) was an ingenious solution to these unwelcome surprises. We extended the Python unit test framework to allow for unit testing of real-world services. These unit tests have dependencies, allowing a chain of tests, and a failure in one test would quickly abort. Take the test shown in Figure 7-1 as an example.

Figure 7-1. ProdTest for DNS Service, showing how one failed test aborts the subsequent chain of tests

A given team’s Prodtest was given the cluster name, and it could validate that team’s services in that cluster. Later additions allowed us to generate a graph of the unit tests and their states. This functionality allowed an engineer to see quickly if their service was correctly configured in all clusters, and if not, why. The graph highlighted the failed step, and the failing Python unit test output a more verbose error message. Any time a team encountered a delay due to another team’s unexpected misconfiguration, a bug could be filed to extend their Prodtest. This ensured that a similar problem would be discovered earlier in the future. SREs were proud to be able to assure their customers that all services—both newly turned up services and existing services with new configuration—would reliably serve production traffic.
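A dependency-chained test runner in the spirit of Prodtest can be sketched in a few lines (this is an illustration of the chaining idea, not the actual Prodtest framework; the DNS-flavored test names below are invented, echoing Figure 7-1):

```python
def run_chain(tests):
    """tests: list of (name, dependencies, test_fn), in dependency order.
    A test whose dependency did not pass is marked 'aborted' and never
    run, so one failure quickly aborts the rest of the chain."""
    results = {}
    for name, deps, fn in tests:
        if any(results.get(d) != "pass" for d in deps):
            results[name] = "aborted"
            continue
        try:
            fn()
            results[name] = "pass"
        except AssertionError:
            results[name] = "fail"
    return results

def passing():
    pass

def failing():
    assert False, "config missing in this cluster"

chain = [
    ("cluster_in_machine_db", [], passing),
    ("dns_config_exists", ["cluster_in_machine_db"], failing),
    ("dns_config_pushed", ["dns_config_exists"], passing),
]
```

Running `run_chain(chain)` marks the first test passed, the second failed, and the third aborted without ever executing it, which is exactly the behavior the figure depicts.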
For the first time, our project managers could predict when a cluster could "go live," and had a complete understanding of why each cluster took six or more weeks to go from "network-ready" to "serving live traffic." Out of the blue, SRE received a mission from senior management: In three months, five new clusters will reach network-ready on the same day. Please turn them up in one week.

Resolving Inconsistencies Idempotently

A "One Week Turnup" was a terrifying mission. We had tens of thousands of lines of shell script owned by dozens of teams. We could quickly tell how unprepared any given cluster was, but fixing it meant that the dozens of teams would have to file hundreds of bugs, and then we had to hope that these bugs would be promptly fixed. We realized that evolving from "Python unit tests finding misconfigurations" to "Python code fixing misconfigurations" could enable us to fix these issues faster. The unit test already knew which cluster we were examining and the specific test that was failing, so we paired each test with a fix. If each fix was written to be idempotent, and could assume that all dependencies were met, resolving the problem would be both easy and safe. Requiring idempotent fixes meant teams could run their "fix script" every 15 minutes without fearing damage to the cluster’s configuration. If the DNS team’s test was blocked on the Machine Database team’s configuration of a new cluster, as soon as the cluster appeared in the database, the DNS team’s tests and fixes would start working. Take the test shown in Figure 7-2 as an example. If TestDnsMonitoringConfigExists fails, as shown, we can call FixDnsMonitoringCreateConfig, which scrapes configuration from a database, then checks a skeleton configuration file into our revision control system. Then TestDnsMonitoringConfigExists passes on retry, and the TestDnsMonitoringConfigPushed test can be attempted. If the test fails, the FixDnsMonitoringPushConfig step runs.
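The test-and-fix pairing can be sketched as a simple reconciliation loop (a hypothetical illustration, not the actual implementation). Because each fix is idempotent, the whole loop can be rerun, for example every 15 minutes, without fear of damaging a healthy configuration:

```python
def reconcile(pairs, max_attempts=3):
    """pairs: ordered list of (test, fix) callables. For each pair, run
    the test; on failure, apply the (idempotent) fix and retest. A fix
    that never makes its test pass stops the run and surfaces the
    failure instead of looping forever."""
    for test, fix in pairs:
        for _ in range(max_attempts):
            if test():
                break
            fix()
        else:
            raise RuntimeError(f"fix for {test.__name__} did not converge")

# Toy example of an idempotent test/fix pair (invented names):
cluster = {}

def test_config_exists():
    return "dns_monitoring" in cluster

def fix_create_config():
    # Idempotent: creating the same skeleton twice changes nothing.
    cluster["dns_monitoring"] = "skeleton-config"
```

Calling `reconcile([(test_config_exists, fix_create_config)])` creates the config on the first pass; calling it again is a no-op, which is precisely the property that makes unattended periodic runs safe.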
If a fix fails multiple times, the automation assumes that the fix failed and stops, notifying the user. Armed with these scripts, a small group of engineers could ensure that we could go from "The network works, and machines are listed in the database" to "Serving 1% of websearch and ads traffic" in a matter of a week or two. At the time, this seemed to be the apex of automation technology. Looking back, this approach was deeply flawed; the latency between the test, the fix, and then a second test introduced flaky tests that sometimes worked and sometimes failed. Not all fixes were naturally idempotent, so a flaky test that was followed by a fix might render the system in an inconsistent state.

Figure 7-2. ProdTest for DNS Service, showing that one failed test resulted in only running one fix

The Inclination to Specialize

Automation processes can vary in three respects:

- Competence, i.e., their accuracy
- Latency, how quickly all steps are executed when initiated
- Relevance, or proportion of real-world process covered by automation

We began with a process that was highly competent (maintained and run by the service owners), high-latency (the service owners performed the process in their spare time or assigned it to new engineers), and very relevant (the service owners knew when the real world changed, and could fix the automation). To reduce turnup latency, many service owning teams instructed a single "turnup team" what automation to run. The turnup team used tickets to start each stage in the turnup so that we could track the remaining tasks, and who those tasks were assigned to. If the human interactions regarding automation modules occurred between people in the same room, cluster turnups could happen in a much shorter time. Finally, we had our competent, accurate, and timely automation process! But this state didn’t last long. The real world is chaotic: software, configuration, data, etc.
changed, resulting in over a thousand separate changes a day to affected systems. The people most affected by automation bugs were no longer domain experts, so the automation became less relevant (meaning that new steps were missed) and less competent (new flags might have caused automation to fail). However, it took a while for this drop in quality to impact velocity. Automation code, like unit test code, dies when the maintaining team isn’t obsessive about keeping the code in sync with the codebase it covers. The world changes around the code: the DNS team adds new configuration options, the storage team changes their package names, and the networking team needs to support new devices. By relieving teams who ran services of the responsibility to maintain and run their automation code, we created ugly organizational incentives:

- A team whose primary task is to speed up the current turnup has no incentive to reduce the technical debt of the service-owning team running the service in production later.
- A team not running automation has no incentive to build systems that are easy to automate.
- A product manager whose schedule is not affected by low-quality automation will always prioritize new features over simplicity and automation.

The most functional tools are usually written by those who use them. A similar argument applies to why product development teams benefit from keeping at least some operational awareness of their systems in production. Turnups were again high-latency, inaccurate, and incompetent—the worst of all worlds. However, an unrelated security mandate allowed us out of this trap. Much of distributed automation relied at that time on SSH. This is clumsy from a security perspective, because people must have root on many machines to run most commands. A growing awareness of advanced, persistent security threats drove us to reduce the privileges SREs enjoyed to the absolute minimum they needed to do their jobs.
We had to replace our use of sshd with an authenticated, ACL-driven, RPC-based Local Admin Daemon, also known as Admin Servers, which had permissions to perform those local changes. As a result, no one could install or modify a server without an audit trail. Changes to the Local Admin Daemon and the Package Repo were gated on code reviews, making it very difficult for someone to exceed their authority; giving someone the access to install packages would not let them view colocated logs. The Admin Server logged the RPC requestor, any parameters, and the results of all RPCs to enhance debugging and security audits.

Service-Oriented Cluster-Turnup

In the next iteration, Admin Servers became part of service teams’ workflows, both as related to the machine-specific Admin Servers (for installing packages and rebooting) and cluster-level Admin Servers (for actions like draining or turning up a service). SREs moved from writing shell scripts in their home directories to building peer-reviewed RPC servers with fine-grained ACLs. Later on, after the realization that turnup processes had to be owned by the teams that owned the services fully sank in, we saw this as a way to approach cluster turnup as a Service-Oriented Architecture (SOA) problem: service owners would be responsible for creating an Admin Server to handle cluster turnup/turndown RPCs, sent by the system that knew when clusters were ready. In turn, each team would provide the contract (API) that the turnup automation needed, while still being free to change the underlying implementation. As a cluster reached "network-ready," automation sent an RPC to each Admin Server that played a part in turning up the cluster. We now have a low-latency, competent, and accurate process; most importantly, this process has stayed strong as the rate of change, the number of teams, and the number of services seem to double each year.
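An Admin Server of the kind described, with per-method ACLs and an audit trail of requestor, parameters, and result, might look like this in miniature (the roles and method names are invented for illustration; a real implementation would sit behind an authenticated RPC transport):

```python
# Per-method ACLs: installing packages does not grant log access,
# matching the separation of authority described above.
ACL = {
    "install_package": {"release-eng"},
    "read_logs": {"log-admins"},
}

def handle_rpc(caller, method, params, audit_log):
    """Check the caller's role against the method's ACL, then log the
    requestor, parameters, and result, whether or not the call was
    allowed, so every action leaves an audit trail."""
    allowed = caller["role"] in ACL.get(method, set())
    result = "ok" if allowed else "permission-denied"
    audit_log.append({
        "caller": caller["user"],
        "method": method,
        "params": params,
        "result": result,
    })
    return result
```

The key property is that authorization and auditing live in one choke point: there is no code path that performs a local change without passing through the ACL check and the log append.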
As mentioned earlier, our evolution of turnup automation followed a path:

1. Operator-triggered manual action (no automation)
2. Operator-written, system-specific automation
3. Externally maintained generic automation
4. Internally maintained, system-specific automation
5. Autonomous systems that need no human intervention

While this evolution has, broadly speaking, been a success, the Borg case study illustrates another way we have come to think of the problem of automation.

Borg: Birth of the Warehouse-Scale Computer

Another way to understand the development of our attitude toward automation, and when and where that automation is best deployed, is to consider the history of the development of our cluster management systems. 31 Like MySQL on Borg, which demonstrated the success of converting manual operations to automatic ones, and the cluster turnup process, which demonstrated the downside of not thinking carefully enough about where and how automation was implemented, developing cluster management also ended up demonstrating another lesson about how automation should be done. Like our previous two examples, something quite sophisticated was created as the eventual result of continuous evolution from simpler beginnings. Google’s clusters were initially deployed much like everyone else’s small networks of the time: racks of machines with specific purposes and heterogeneous configurations. Engineers would log in to some well-known “master” machine to perform administrative tasks; “golden” binaries and configuration lived on these masters. As we had only one colo provider, most naming logic implicitly assumed that location. As production grew, and we began to use multiple clusters, different domains (cluster names) entered the picture. It became necessary to have a file describing what each machine did, which grouped machines under some loose naming strategy.
This descriptor file, in combination with the equivalent of a parallel SSH, allowed us to reboot (for example) all the search machines in one go. Around this time, it was common to get tickets like “search is done with machine x1, crawl can have the machine now.” Automation development began. Initially automation consisted of simple Python scripts for operations such as the following:

- Service management: keeping services running (e.g., restarts after segfaults)
- Tracking what services were supposed to run on which machines
- Log message parsing: SSHing into each machine and looking for regexps

Automation eventually mutated into a proper database that tracked machine state, and also incorporated more sophisticated monitoring tools. With the union set of the automation available, we could now automatically manage much of the lifecycle of machines: noticing when machines were broken, removing the services, sending them to repair, and restoring the configuration when they came back from repair. But to take a step back, this automation was useful yet profoundly limited, due to the fact that abstractions of the system were relentlessly tied to physical machines. We needed a new approach, hence Borg [Ver15] was born: a system that moved away from the relatively static host/port/job assignments of the previous world, toward treating a collection of machines as a managed sea of resources. Central to its success—and its conception—was the notion of turning cluster management into an entity for which API calls could be issued, to some central coordinator. This liberated extra dimensions of efficiency, flexibility, and reliability: unlike the previous model of machine “ownership,” Borg could allow machines to schedule, for example, batch and user-facing tasks on the same machine.
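The machine-lifecycle automation described above (notice a broken machine, remove its services, send it to repair, restore configuration afterward) can be sketched as a small state machine. The state and event names here are invented for illustration:

```python
# Legal lifecycle transitions: (current_state, event) -> next_state.
TRANSITIONS = {
    ("healthy", "fault_detected"): "broken",
    ("broken", "services_removed"): "drained",
    ("drained", "sent_to_repair"): "in_repair",
    ("in_repair", "repair_finished"): "repaired",
    ("repaired", "config_restored"): "healthy",
}

def step(state, event):
    """Advance a machine's lifecycle; reject anything not explicitly
    allowed, so automation can't (say) restore config onto a machine
    that was never repaired."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} on {event}")
```

Driving a machine through the full cycle returns it to `healthy`; any out-of-order event raises instead of silently doing the wrong thing.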
This functionality ultimately resulted in continuous and automatic operating system upgrades with a very small amount of constant 32 effort—effort that does not scale with the total size of production deployments. Slight deviations in machine state are now automatically fixed; brokenness and lifecycle management are essentially no-ops for SRE at this point. Thousands of machines are born, die, and go into repairs daily with no SRE effort. To echo the words of Ben Treynor Sloss: by taking the approach that this was a software problem, the initial automation bought us enough time to turn cluster management into something autonomous, as opposed to automated. We achieved this goal by bringing ideas related to data distribution, APIs, hub-and-spoke architectures, and classic distributed system software development to bear upon the domain of infrastructure management. An interesting analogy is possible here: we can make a direct mapping between the single machine case and the development of cluster management abstractions. In this view, rescheduling on another machine looks a lot like a process moving from one CPU to another: of course, those compute resources happen to be at the other end of a network link, but to what extent does that actually matter? Thinking in these terms, rescheduling looks like an intrinsic feature of the system rather than something one would “automate”—humans couldn’t react fast enough anyway. Similarly in the case of cluster turnup: in this metaphor, cluster turnup is simply additional schedulable capacity, a bit like adding disk or RAM to a single computer. However, a single-node computer is not, in general, expected to continue operating when a large number of components fail. The global computer is—it must be self-repairing to operate once it grows past a certain size, due to the essentially statistically guaranteed large number of failures taking place every second. 
This implies that as we move systems up the hierarchy from manually triggered, to automatically triggered, to autonomous, some capacity for self-introspection is necessary to survive. Of course, for effective troubleshooting, the details of internal operation that the introspection relies upon should also be exposed to the humans managing the overall system.

Reliability Is the Fundamental Feature

Analogous discussions about the impact of automation in the noncomputer domain—for example, in airplane flight[33] or industrial applications—often point out the downside of highly effective automation:[34] human operators are progressively more relieved of useful direct contact with the system as the automation covers more and more daily activities over time. Inevitably, then, a situation arises in which the automation fails, and the humans are now unable to successfully operate the system. The fluidity of their reactions has been lost due to lack of practice, and their mental models of what the system should be doing no longer reflect the reality of what it is doing.[35] This situation arises more when the system is nonautonomous—i.e., where automation replaces manual actions, and the manual actions are presumed to be always performable and available just as they were before. Sadly, over time, this ultimately becomes false: those manual actions are not always performable because the functionality to permit them no longer exists. We, too, have experienced situations where automation has been actively harmful on a number of occasions—see Automation: Enabling Failure at Scale—but in Google's experience, there are more systems for which automation or autonomous behavior are no longer optional extras. As you scale, this is of course the case, but there are still strong arguments for more autonomous behavior of systems irrespective of size. Reliability is the fundamental feature, and autonomous, resilient behavior is one useful way to get that.
Recommendations

You might read the examples in this chapter and decide that you need to be Google-scale before you have anything to do with automation whatsoever. This is untrue, for two reasons: automation provides more than just time saving, so it's worth implementing in more cases than a simple time-expended versus time-saved calculation might suggest; and the approach with the highest leverage actually occurs in the design phase, because shipping and iterating rapidly might allow you to implement functionality faster, yet rarely makes for a resilient system. Autonomous operation is difficult to convincingly retrofit to sufficiently large systems, but standard good practices in software engineering will help considerably: having decoupled subsystems, introducing APIs, minimizing side effects, and so on.

Automation: Enabling Failure at Scale

Google runs over a dozen of its own large datacenters, but we also depend on machines in many third-party colocation facilities (or "colos"). Our machines in these colos are used to terminate most incoming connections, or as a cache for our own Content Delivery Network, in order to lower end-user latency. At any point in time, a number of these racks are being installed or decommissioned; both of these processes are largely automated. One step during decommission involves overwriting the full content of the disk of all the machines in the rack, after which point an independent system verifies the successful erase. We call this process "Diskerase." Once upon a time, the automation in charge of decommissioning a particular rack failed, but only after the Diskerase step had completed successfully. Later, the decommission process was restarted from the beginning, to debug the failure. On that iteration, when trying to send the set of machines in the rack to Diskerase, the automation determined that the set of machines that still needed to be Diskerased was (correctly) empty.
Unfortunately, the empty set was used as a special value, interpreted to mean "everything." This means the automation sent almost all the machines we have in all colos to Diskerase. Within minutes, the highly efficient Diskerase wiped the disks on all machines in our CDN, and the machines were no longer able to terminate connections from users (or do anything else useful). We were still able to serve all the users from our own datacenters, and after a few minutes the only effect visible externally was a slight increase in latency. As far as we could tell, very few users noticed the problem at all, thanks to good capacity planning (at least we got that right!). Meanwhile, we spent the better part of two days reinstalling the machines in the affected colo racks; then we spent the following weeks auditing and adding more sanity checks—including rate limiting—into our automation, and making our decommission workflow idempotent.

[26] For readers who already feel they precisely understand the value of automation, skip ahead to The Value for Google SRE. However, note that our description contains some nuances that might be useful to keep in mind while reading the rest of the chapter.
[27] The expertise acquired in building such automation is also valuable in itself; engineers both deeply understand the existing processes they have automated and can later automate novel processes more quickly.
[28] See the following XKCD cartoon: https://xkcd.com/1205/.
[29] See, for example, https://blog.engineyard.com/2014/pets-vs-cattle.
[30] Of course, not every system that needs to be managed actually provides callable APIs for management—forcing some tooling to use, e.g., CLI invocations or automated website clicks.
[31] We have compressed and simplified this history to aid understanding.
[32] As in a small, unchanging number.
[33] See, e.g., https://en.wikipedia.org/wiki/Air_France_Flight_447.
[34] See, e.g., [Bai83] and [Sar97].
[35] This is yet another good reason for regular practice drills; see Disaster Role Playing.

Copyright © 2017 Google, Inc. Published by O'Reilly Media, Inc. Licensed under CC BY-NC-ND 4.0
Chapter 7: The Evolution of Automation at Google

Written by Niall Murphy with John Looney and Michael Kacirek
Edited by Betsy Beyer

Besides black art, there is only automation and mechanization.
Federico García Lorca (1898–1936), Spanish poet and playwright

For SRE, automation is a force multiplier, not a panacea.
Of course, just multiplying force does not naturally change the accuracy of where that force is applied: doing automation thoughtlessly can create as many problems as it solves. Therefore, while we believe that software-based automation is superior to manual operation in most circumstances, better than either option is a higher-level system design requiring neither of them—an autonomous system. Or to put it another way, the value of automation comes from both what it does and its judicious application. We'll discuss both the value of automation and how our attitude has evolved over time.

The Value of Automation

What exactly is the value of automation?[26]

Consistency

Although scale is an obvious motivation for automation, there are many other reasons to use it. Take the example of university computing systems, where many systems engineering folks started their careers. Systems administrators of that background were generally charged with running a collection of machines or some software, and were accustomed to manually performing various actions in the discharge of that duty. One common example is creating user accounts; others include purely operational duties like making sure backups happen, managing server failover, and small data manipulations like changing the upstream DNS servers' resolv.conf, DNS server zone data, and similar activities. Ultimately, however, this prevalence of manual tasks is unsatisfactory for both the organizations and indeed the people maintaining systems in this way. For a start, any action performed by a human or humans hundreds of times won't be performed the same way each time: even with the best will in the world, very few of us will ever be as consistent as a machine. This inevitable lack of consistency leads to mistakes, oversights, issues with data quality, and, yes, reliability problems. In this domain—the execution of well-scoped, known procedures—the value of consistency is in many ways the primary value of automation.
A Platform

Automation doesn't just provide consistency. Designed and done properly, automatic systems also provide a platform that can be extended, applied to more systems, or perhaps even spun out for profit.[27] (The alternative, no automation, is neither cost effective nor extensible: it is instead a tax levied on the operation of a system.) A platform also centralizes mistakes. In other words, a bug fixed in the code will be fixed there once and forever, unlike a sufficiently large set of humans performing the same procedure, as discussed previously. A platform can be extended to perform additional tasks more easily than humans can be instructed to perform them (or sometimes even realize that they have to be done). Depending on the nature of the task, it can run either continuously or much more frequently than humans could appropriately accomplish the task, or at times that are inconvenient for humans. Furthermore, a platform can export metrics about its performance, or otherwise allow you to discover details about your process you didn't know previously, because these details are more easily measurable within the context of a platform.

Faster Repairs

There's an additional benefit for systems where automation is used to resolve common faults in a system (a frequent situation for SRE-created automation). If automation runs regularly and successfully enough, the result is a reduced mean time to repair (MTTR) for those common faults. You can then spend your time on other tasks instead, thereby achieving increased developer velocity because you don't have to spend time either preventing a problem or (more commonly) cleaning up after it. As is well understood in the industry, the later in the product lifecycle a problem is discovered, the more expensive it is to fix; see Testing for Reliability.
Generally, problems that occur in actual production are most expensive to fix, both in terms of time and money, which means that an automated system looking for problems as soon as they arise has a good chance of lowering the total cost of the system, given that the system is sufficiently large.

Faster Action

In the infrastructural situations where SRE automation tends to be deployed, humans don't usually react as fast as machines. In most common cases, where, for example, failover or traffic switching can be well defined for a particular application, it makes no sense to effectively require a human to intermittently press a button called "Allow system to continue to run." (Yes, it is true that sometimes automatic procedures can end up making a bad situation worse, but that is why such procedures should be scoped over well-defined domains.) Google has a large amount of automation; in many cases, the services we support could not long survive without this automation because they crossed the threshold of manageable manual operation long ago.

Time Saving

Finally, time saving is an oft-quoted rationale for automation. Although people cite this rationale for automation more than the others, in many ways the benefit is often less immediately calculable. Engineers often waver over whether a particular piece of automation or code is worth writing, in terms of effort saved in not requiring a task to be performed manually versus the effort required to write it.[28] It's easy to overlook the fact that once you have encapsulated some task in automation, anyone can execute the task. Therefore, the time savings apply across anyone who would plausibly use the automation. Decoupling operator from operation is very powerful. Joseph Bironas, an SRE who led Google's datacenter turnup efforts for a time, forcefully argued: "If we are engineering processes and solutions that are not automatable, we continue having to staff humans to maintain the system.
If we have to staff humans to do the work, we are feeding the machines with the blood, sweat, and tears of human beings. Think The Matrix with less special effects and more pissed off System Administrators."

The Value for Google SRE

All of these benefits and trade-offs apply to us just as much as anyone else, and Google does have a strong bias toward automation. Part of our preference for automation springs from our particular business challenges: the products and services we look after are planet-spanning in scale, and we don't typically have time to engage in the same kind of machine or service hand-holding common in other organizations.[29] For truly large services, the factors of consistency, quickness, and reliability dominate most conversations about the trade-offs of performing automation. Another argument in favor of automation, particularly in the case of Google, is our complicated yet surprisingly uniform production environment, described in The Production Environment at Google, from the Viewpoint of an SRE. While other organizations might have an important piece of equipment without a readily accessible API, software for which no source code is available, or another impediment to complete control over production operations, Google generally avoids such scenarios. We have built APIs for systems when no API was available from the vendor. Even though purchasing software for a particular task would have been much cheaper in the short term, we chose to write our own solutions, because doing so produced APIs with the potential for much greater long-term benefits. We spent a lot of time overcoming obstacles to automatic system management, and then resolutely developed that automatic system management itself. Given how Google manages its source code [Pot16], the availability of that code for more or less any system that SRE touches also means that our mission to "own the product in production" is much easier because we control the entirety of the stack.
Of course, although Google is ideologically bent upon using machines to manage machines where possible, reality requires some modification of our approach. It isn't appropriate to automate every component of every system, and not everyone has the ability or inclination to develop automation at a particular time. Some essential systems started out as quick prototypes, not designed to last or to interface with automation. The previous paragraphs state a maximalist view of our position, but one that we have been broadly successful at putting into action within the Google context. In general, we have chosen to create platforms where we could, or to position ourselves so that we could create platforms over time. We view this platform-based approach as necessary for manageability and scalability.

The Use Cases for Automation

In the industry, automation is the term generally used for writing code to solve a wide variety of problems, although the motivations for writing this code, and the solutions themselves, are often quite different. More broadly, in this view, automation is "meta-software"—software to act on software. As we implied earlier, there are a number of use cases for automation. Here is a non-exhaustive list of examples:

- User account creation
- Cluster turnup and turndown for services
- Software or hardware installation preparation and decommissioning
- Rollouts of new software versions
- Runtime configuration changes
- A special case of runtime config changes: changes to your dependencies

This list could continue essentially ad infinitum.

Google SRE's Use Cases for Automation

In Google, we have all of the use cases just listed, and more. However, within Google SRE, our primary affinity has typically been for running infrastructure, as opposed to managing the quality of the data that passes over that infrastructure.
This line isn't totally clear—for example, we care deeply if half of a dataset vanishes after a push, and therefore we alert on coarse-grain differences like this, but it's rare for us to write the equivalent of changing the properties of some arbitrary subset of accounts on a system. Therefore, the context for our automation is often automation to manage the lifecycle of systems, not their data: for example, deployments of a service in a new cluster. To this extent, SRE's automation efforts are not far off what many other people and organizations do, except that we use different tools to manage it and have a different focus (as we'll discuss). Widely available tools like Puppet, Chef, cfengine, and even Perl, which all provide ways to automate particular tasks, differ mostly in terms of the level of abstraction of the components provided to help the act of automating. A full language like Perl provides POSIX-level affordances, which in theory provide an essentially unlimited scope of automation across the APIs accessible to the system,[30] whereas Chef and Puppet provide out-of-the-box abstractions with which services or other higher-level entities can be manipulated. The trade-off here is classic: higher-level abstractions are easier to manage and reason about, but when you encounter a "leaky abstraction," you fail systemically, repeatedly, and potentially inconsistently. For example, we often assume that pushing a new binary to a cluster is atomic; the cluster will either end up with the old version, or the new version. However, real-world behavior is more complicated: that cluster's network can fail halfway through; machines can fail; communication to the cluster management layer can fail, leaving the system in an inconsistent state; depending on the situation, new binaries could be staged but not pushed, or pushed but not restarted, or restarted but not verifiable.
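To make the non-atomic outcomes just enumerated concrete, here is a minimal sketch (all names are hypothetical illustrations, not Google tooling) that models the intermediate states a binary push can leave a machine in, and flags any cluster that is neither fully old nor fully new:

```python
from enum import Enum

class PushState(Enum):
    """Possible per-machine outcomes of a 'push new binary' operation."""
    OLD_VERSION = "old"      # push never reached this machine
    STAGED = "staged"        # new binary copied, old one still running
    PUSHED = "pushed"        # new binary installed, process not restarted
    RESTARTED = "restarted"  # running new binary, health not yet verified
    VERIFIED = "verified"    # running new binary and passing checks

def cluster_is_consistent(machine_states):
    """The 'atomic push' abstraction only holds if every machine ended up
    in the same terminal state: all old, or all verified-new."""
    states = set(machine_states.values())
    return states <= {PushState.OLD_VERSION} or states <= {PushState.VERIFIED}

# A partially failed push: the "old version or new version" abstraction leaks.
cluster = {
    "m1": PushState.VERIFIED,
    "m2": PushState.STAGED,   # network failed halfway through
    "m3": PushState.PUSHED,   # process restart never happened
}
if not cluster_is_consistent(cluster):
    # Most automation can do little more than halt here and page a human.
    stuck = {m: s.value for m, s in cluster.items() if s is not PushState.VERIFIED}
    print("inconsistent push, needs intervention:", stuck)
```

The point of the sketch is how many distinct intermediate states exist between "old" and "new"; any rollout abstraction that models only the two endpoints will misreport all of them.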
Very few abstractions model these kinds of outcomes successfully, and most generally end up halting themselves and calling for intervention. Truly bad automation systems don't even do that. SRE has a number of philosophies and products in the domain of automation, some of which look more like generic rollout tools without particularly detailed modeling of higher-level entities, and some of which look more like languages for describing service deployment (and so on) at a very abstract level. Work done in the latter tends to be more reusable and be more of a common platform than the former, but the complexity of our production environment sometimes means that the former approach is the most immediately tractable option.

A Hierarchy of Automation Classes

Although all of these automation steps are valuable, and indeed an automation platform is valuable in and of itself, in an ideal world, we wouldn't need externalized automation. In fact, instead of having a system that has to have external glue logic, it would be even better to have a system that needs no glue logic at all, not just because internalization is more efficient (although such efficiency is useful), but because it has been designed to not need glue logic in the first place. Accomplishing that involves taking the use cases for glue logic—generally "first order" manipulations of a system, such as adding accounts or performing system turnup—and finding a way to handle those use cases directly within the application. As a more detailed example, most turnup automation at Google is problematic because it ends up being maintained separately from the core system and therefore suffers from "bit rot," i.e., not changing when the underlying systems change. Despite the best of intentions, attempting to more tightly couple the two (turnup automation and the core system) often fails due to unaligned priorities, as product developers will, not unreasonably, resist a test deployment requirement for every change.
Secondly, automation that is crucial but only executed at infrequent intervals and therefore difficult to test is often particularly fragile because of the extended feedback cycle. Cluster failover is one classic example of infrequently executed automation: failovers might only occur every few months, or infrequently enough that inconsistencies between instances are introduced. The evolution of automation follows a path:

1. No automation: the database master is failed over manually between locations.
2. Externally maintained system-specific automation: an SRE has a failover script in his or her home directory.
3. Externally maintained generic automation: the SRE adds database support to a "generic failover" script that everyone uses.
4. Internally maintained system-specific automation: the database ships with its own failover script.
5. Systems that don't need any automation: the database notices problems, and automatically fails over without human intervention.

SRE hates manual operations, so we obviously try to create systems that don't require them. However, sometimes manual operations are unavoidable. There is additionally a subvariety of automation that applies changes not across the domain of specific system-related configuration, but across the domain of production as a whole. In a highly centralized proprietary production environment like Google's, there are a large number of changes that have a non–service-specific scope—e.g., changing upstream Chubby servers, a flag change to the Bigtable client library to make access more reliable, and so on—which nonetheless need to be safely managed and rolled back if necessary. Beyond a certain volume of changes, it is infeasible for production-wide changes to be accomplished manually, and at some time before that point, it's a waste to have manual oversight for a process where a large proportion of the changes are either trivial or accomplished successfully by basic relaunch-and-check strategies.
Let's use internal case studies to illustrate some of the preceding points in detail. The first case study is about how, due to some diligent, far-sighted work, we managed to achieve the self-professed nirvana of SRE: to automate ourselves out of a job.

Automate Yourself Out of a Job: Automate ALL the Things!

For a long while, the Ads products at Google stored their data in a MySQL database. Because Ads data obviously has high reliability requirements, an SRE team was charged with looking after that infrastructure. From 2005 to 2008, the Ads Database mostly ran in what we considered to be a mature and managed state. For example, we had automated away the worst, but not all, of the routine work for standard replica replacements. We believed the Ads Database was well managed and that we had harvested most of the low-hanging fruit in terms of optimization and scale. However, as daily operations became comfortable, team members began to look at the next level of system development: migrating MySQL onto Google's cluster scheduling system, Borg. We hoped this migration would provide two main benefits:

- Completely eliminate machine/replica maintenance: Borg would automatically handle the setup/restart of new and broken tasks.
- Enable bin-packing of multiple MySQL instances on the same physical machine: Borg would enable more efficient use of machine resources via Containers.

In late 2008, we successfully deployed a proof of concept MySQL instance on Borg. Unfortunately, this was accompanied by a significant new difficulty. A core operating characteristic of Borg is that its tasks move around automatically. Tasks commonly move within Borg as frequently as once or twice per week. This frequency was tolerable for our database replicas, but unacceptable for our masters. At that time, the process for master failover took 30–90 minutes per instance.
Simply because we ran on shared machines and were subject to reboots for kernel upgrades, in addition to the normal rate of machine failure, we had to expect a number of otherwise unrelated failovers every week. This factor, in combination with the number of shards on which our system was hosted, meant that:

- Manual failovers would consume a substantial amount of human hours and would give us best-case availability of 99% uptime, which fell short of the actual business requirements of the product.
- In order to meet our error budgets, each failover would have to take less than 30 seconds of downtime.
- There was no way to optimize a human-dependent procedure to make downtime shorter than 30 seconds.

Therefore, our only choice was to automate failover. Actually, we needed to automate more than just failover. In 2009 Ads SRE completed our automated failover daemon, which we dubbed "Decider." Decider could complete MySQL failovers for both planned and unplanned failovers in less than 30 seconds 95% of the time. With the creation of Decider, MySQL on Borg (MoB) finally became a reality. We graduated from optimizing our infrastructure for a lack of failover to embracing the idea that failure is inevitable, and therefore optimizing to recover quickly through automation. While automation let us achieve highly available MySQL in a world that forced up to two restarts per week, it did come with its own set of costs. All of our applications had to be changed to include significantly more failure-handling logic than before. Given that the norm in the MySQL development world is to assume that the MySQL instance will be the most stable component in the stack, this switch meant customizing software like JDBC to be more tolerant of our failure-prone environment. However, the benefits of migrating to MoB with Decider were well worth these costs. Once on MoB, the time our team spent on mundane operational tasks dropped by 95%.
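The downtime arithmetic behind those numbers is easy to reproduce. In the sketch below, the 30–90 minute manual failover window and the sub-30-second Decider target come from the text; the rate of two forced failovers per shard per week is an illustrative assumption based on the "up to two restarts per week" figure:

```python
SECONDS_PER_WEEK = 7 * 24 * 3600  # 604,800

def availability(failovers_per_week, downtime_per_failover_s):
    """Uptime fraction for a single shard, counting only failover downtime."""
    downtime = failovers_per_week * downtime_per_failover_s
    return 1 - downtime / SECONDS_PER_WEEK

# Manual failover at the slow end of the 30-90 minute range, twice a week
# (assumed rate): roughly 98% uptime per shard, i.e., short of "two nines plus."
manual = availability(failovers_per_week=2, downtime_per_failover_s=90 * 60)

# Decider: under 30 seconds per failover, same assumed rate.
automated = availability(failovers_per_week=2, downtime_per_failover_s=30)

print(f"manual:    {manual:.3%}")   # ~98.2%
print(f"automated: {automated:.4%}")  # ~99.99%
```

With many shards multiplying the downtime, a human-paced procedure cannot stay inside a tight error budget; only a sub-30-second automated failover can.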
Our failovers were automated, so an outage of a single database task no longer paged a human. The main upshot of this new automation was that we had a lot more free time to spend on improving other parts of the infrastructure. Such improvements had a cascading effect: the more time we saved, the more time we were able to spend on optimizing and automating other tedious work. Eventually, we were able to automate schema changes, causing the cost of total operational maintenance of the Ads Database to drop by nearly 95%. Some might say that we had successfully automated ourselves out of this job. The hardware side of our domain also saw improvement. Migrating to MoB freed up considerable resources because we could schedule multiple MySQL instances on the same machines, which improved utilization of our hardware. In total, we were able to free up about 60% of our hardware. Our team was now flush with hardware and engineering resources. This example demonstrates the wisdom of going the extra mile to deliver a platform rather than replacing existing manual procedures. The next example comes from the cluster infrastructure group, and illustrates some of the more difficult trade-offs you might encounter on your way to automating all the things.

Soothing the Pain: Applying Automation to Cluster Turnups

Ten years ago, the Cluster Infrastructure SRE team seemed to get a new hire every few months. As it turned out, that was approximately the same frequency at which we turned up a new cluster. Because turning up a service in a new cluster gives new hires exposure to a service's internals, this task seemed like a natural and useful training tool. The steps taken to get a cluster ready for use were something like the following:

1. Fit out a datacenter building for power and cooling.
2. Install and configure core switches and connections to the backbone.
3. Install a few initial racks of servers.
4. Configure basic services such as DNS and installers, then configure a lock service, storage, and computing.
5. Deploy the remaining racks of machines.
6. Assign user-facing services resources, so their teams can set up the services.

Steps 4 and 6 were extremely complex. While basic services like DNS are relatively simple, the storage and compute subsystems at that time were still in heavy development, so new flags, components, and optimizations were added weekly. Some services had more than a hundred different component subsystems, each with a complex web of dependencies. Failing to configure one subsystem, or configuring a system or component differently than other deployments, is a customer-impacting outage waiting to happen. In one case, a multi-petabyte Bigtable cluster was configured to not use the first (logging) disk on 12-disk systems, for latency reasons. A year later, some automation assumed that if a machine's first disk wasn't being used, that machine didn't have any storage configured; therefore, it was safe to wipe the machine and set it up from scratch. All of the Bigtable data was wiped, instantly. Thankfully we had multiple real-time replicas of the dataset, but such surprises are unwelcome. Automation needs to be careful about relying on implicit "safety" signals. Early automation focused on accelerating cluster delivery. This approach tended to rely upon creative use of SSH for tedious package distribution and service initialization problems. This strategy was an initial win, but those free-form scripts became a cholesterol of technical debt.

Detecting Inconsistencies with Prodtest

As the number of clusters grew, some clusters required hand-tuned flags and settings. As a result, teams wasted more and more time chasing down difficult-to-spot misconfigurations. If a flag that made GFS more responsive to log processing leaked into the default templates, cells with many files could run out of memory under load.
Infuriating and time-consuming misconfigurations crept in with nearly every large configuration change. The creative—though brittle—shell scripts we used to configure clusters were scaling neither to the number of people who wanted to make changes nor to the sheer number of cluster permutations that needed to be built. These shell scripts also failed to resolve more significant concerns before declaring that a service was good to take customer-facing traffic, such as:

- Were all of the service's dependencies available and correctly configured?
- Were all configurations and packages consistent with other deployments?
- Could the team confirm that every configuration exception was desired?

Prodtest (Production Test) was an ingenious solution to these unwelcome surprises. We extended the Python unit test framework to allow for unit testing of real-world services. These unit tests have dependencies, allowing a chain of tests, and a failure in one test would quickly abort. Take the test shown in Figure 7-1 as an example.

Figure 7-1. ProdTest for DNS Service, showing how one failed test aborts the subsequent chain of tests

A given team's Prodtest was given the cluster name, and it could validate that team's services in that cluster. Later additions allowed us to generate a graph of the unit tests and their states. This functionality allowed an engineer to see quickly if their service was correctly configured in all clusters, and if not, why. The graph highlighted the failed step, and the failing Python unit test output a more verbose error message. Any time a team encountered a delay due to another team's unexpected misconfiguration, a bug could be filed to extend their Prodtest. This ensured that a similar problem would be discovered earlier in the future. SREs were proud to be able to assure their customers that all services—both newly turned up services and existing services with new configuration—would reliably serve production traffic.
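A dependency-aware chain of checks in the spirit of Prodtest can be sketched in ordinary Python. This is an illustrative reconstruction, not the real Prodtest; the check names and data sources are invented:

```python
class ProdCheck:
    """A check that only runs if all of the checks it depends on passed."""
    def __init__(self, name, fn, deps=()):
        self.name, self.fn, self.deps = name, fn, deps

def run_chain(checks, cluster):
    """Run checks in order; a failed dependency aborts everything downstream."""
    results = {}
    for check in checks:
        if all(results.get(d) for d in check.deps):
            results[check.name] = check.fn(cluster)  # True = passed
        else:
            results[check.name] = None  # aborted: a dependency failed
    return results

# Hypothetical DNS-team checks for a new cluster, chained by dependency.
checks = [
    ProdCheck("machine_db_has_cluster", lambda c: c in MACHINE_DB),
    ProdCheck("dns_config_exists", lambda c: c in DNS_CONFIGS,
              deps=["machine_db_has_cluster"]),
    ProdCheck("dns_monitoring_configured", lambda c: c in DNS_MONITORING,
              deps=["dns_config_exists"]),
]

MACHINE_DB = {"cluster-aa"}
DNS_CONFIGS = set()   # misconfiguration: the DNS config was never generated
DNS_MONITORING = set()

print(run_chain(checks, "cluster-aa"))
# dns_config_exists fails, so dns_monitoring_configured is aborted (None)
# rather than reported as a second, misleading failure.
```

Aborting the downstream checks is the useful part: an engineer sees one failed step, not a cascade of spurious failures caused by it.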
For the first time, our project managers could predict when a cluster could "go live," and had a complete understanding of why each cluster took six or more weeks to go from "network-ready" to "serving live traffic." Out of the blue, SRE received a mission from senior management:

In three months, five new clusters will reach network-ready on the same day. Please turn them up in one week.

Resolving Inconsistencies Idempotently

A "One Week Turnup" was a terrifying mission. We had tens of thousands of lines of shell script owned by dozens of teams. We could quickly tell how unprepared any given cluster was, but fixing it meant that the dozens of teams would have to file hundreds of bugs, and then we had to hope that these bugs would be promptly fixed.

We realized that evolving from "Python unit tests finding misconfigurations" to "Python code fixing misconfigurations" could enable us to fix these issues faster. The unit test already knew which cluster we were examining and the specific test that was failing, so we paired each test with a fix. If each fix was written to be idempotent, and could assume that all dependencies were met, resolving the problem would be easy—and safe. Requiring idempotent fixes meant teams could run their "fix script" every 15 minutes without fearing damage to the cluster’s configuration. If the DNS team’s test was blocked on the Machine Database team’s configuration of a new cluster, as soon as the cluster appeared in the database, the DNS team’s tests and fixes would start working.

Take the test shown in Figure 7-2 as an example. If TestDnsMonitoringConfigExists fails, as shown, we can call FixDnsMonitoringCreateConfig, which scrapes configuration from a database, then checks a skeleton configuration file into our revision control system. Then TestDnsMonitoringConfigExists passes on retry, and the TestDnsMonitoringConfigPushed test can be attempted. If the test fails, the FixDnsMonitoringPushConfig step runs.
If a fix fails multiple times, the automation assumes that the fix failed and stops, notifying the user. Armed with these scripts, a small group of engineers could ensure that we could go from "The network works, and machines are listed in the database" to "Serving 1% of websearch and ads traffic" in a matter of a week or two. At the time, this seemed to be the apex of automation technology.

Looking back, this approach was deeply flawed; the latency between the test, the fix, and then a second test introduced flaky tests that sometimes worked and sometimes failed. Not all fixes were naturally idempotent, so a flaky test that was followed by a fix might render the system in an inconsistent state.

Figure 7-2. ProdTest for DNS Service, showing that one failed test resulted in only running one fix

The Inclination to Specialize

Automation processes can vary in three respects:

- Competence, i.e., their accuracy
- Latency, how quickly all steps are executed when initiated
- Relevance, or proportion of real-world process covered by automation

We began with a process that was highly competent (maintained and run by the service owners), high-latency (the service owners performed the process in their spare time or assigned it to new engineers), and very relevant (the service owners knew when the real world changed, and could fix the automation).

To reduce turnup latency, many service owning teams instructed a single "turnup team" what automation to run. The turnup team used tickets to start each stage in the turnup so that we could track the remaining tasks, and who those tasks were assigned to. If the human interactions regarding automation modules occurred between people in the same room, cluster turnups could happen in a much shorter time. Finally, we had our competent, accurate, and timely automation process!

But this state didn’t last long. The real world is chaotic: software, configuration, data, etc.
changed, resulting in over a thousand separate changes a day to affected systems. The people most affected by automation bugs were no longer domain experts, so the automation became less relevant (meaning that new steps were missed) and less competent (new flags might have caused automation to fail). However, it took a while for this drop in quality to impact velocity.

Automation code, like unit test code, dies when the maintaining team isn’t obsessive about keeping the code in sync with the codebase it covers. The world changes around the code: the DNS team adds new configuration options, the storage team changes their package names, and the networking team needs to support new devices.

By relieving teams who ran services of the responsibility to maintain and run their automation code, we created ugly organizational incentives:

- A team whose primary task is to speed up the current turnup has no incentive to reduce the technical debt of the service-owning team running the service in production later.
- A team not running automation has no incentive to build systems that are easy to automate.
- A product manager whose schedule is not affected by low-quality automation will always prioritize new features over simplicity and automation.

The most functional tools are usually written by those who use them. A similar argument applies to why product development teams benefit from keeping at least some operational awareness of their systems in production.

Turnups were again high-latency, inaccurate, and incompetent—the worst of all worlds. However, an unrelated security mandate allowed us out of this trap. Much of distributed automation relied at that time on SSH. This is clumsy from a security perspective, because people must have root on many machines to run most commands. A growing awareness of advanced, persistent security threats drove us to reduce the privileges SREs enjoyed to the absolute minimum they needed to do their jobs.
We had to replace our use of sshd with an authenticated, ACL-driven, RPC-based Local Admin Daemon, also known as Admin Servers, which had permissions to perform those local changes. As a result, no one could install or modify a server without an audit trail. Changes to the Local Admin Daemon and the Package Repo were gated on code reviews, making it very difficult for someone to exceed their authority; giving someone the access to install packages would not let them view colocated logs. The Admin Server logged the RPC requestor, any parameters, and the results of all RPCs to enhance debugging and security audits.

Service-Oriented Cluster-Turnup

In the next iteration, Admin Servers became part of service teams’ workflows, both as related to the machine-specific Admin Servers (for installing packages and rebooting) and cluster-level Admin Servers (for actions like draining or turning up a service). SREs moved from writing shell scripts in their home directories to building peer-reviewed RPC servers with fine-grained ACLs. Later on, after the realization that turnup processes had to be owned by the teams that owned the services fully sank in, we saw this as a way to approach cluster turnup as a Service-Oriented Architecture (SOA) problem: service owners would be responsible for creating an Admin Server to handle cluster turnup/turndown RPCs, sent by the system that knew when clusters were ready. In turn, each team would provide the contract (API) that the turnup automation needed, while still being free to change the underlying implementation. As a cluster reached "network-ready," automation sent an RPC to each Admin Server that played a part in turning up the cluster. We now have a low-latency, competent, and accurate process; most importantly, this process has stayed strong as the rate of change, the number of teams, and the number of services seem to double each year.
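The ACL-gated, audit-logged RPC pattern described above can be sketched in miniature. The method names, group names, and ACL table below are invented for illustration; the real Admin Server interfaces are internal to Google and not described in the book.

```python
# A minimal sketch of an ACL-checked, audit-logged admin RPC handler.
# All names and the ACL table are illustrative inventions.
import logging

logging.basicConfig(level=logging.INFO)

# Which groups may call which RPC methods.
ACLS = {
    "install_package": {"release-eng"},
    "reboot": {"sre-oncall"},
}

def handle_rpc(caller, groups, method, params):
    allowed = bool(ACLS.get(method, set()) & set(groups))
    if allowed:
        result = "OK"  # here the daemon would perform the local change
    else:
        result = "PERMISSION_DENIED"
    # Every request is logged: requestor, parameters, and outcome.
    logging.info("rpc caller=%s method=%s params=%s result=%s",
                 caller, method, params, result)
    return result

print(handle_rpc("jane", ["sre-oncall"], "reboot", {"machine": "ab12"}))
print(handle_rpc("jane", ["sre-oncall"], "install_package", {"pkg": "bigtable"}))
```

The two properties the chapter emphasizes are both visible here: fine-grained authority (membership in one group grants one capability, not root everywhere) and an unconditional audit trail, written whether or not the call is permitted.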
As mentioned earlier, our evolution of turnup automation followed a path:

1. Operator-triggered manual action (no automation)
2. Operator-written, system-specific automation
3. Externally maintained generic automation
4. Internally maintained, system-specific automation
5. Autonomous systems that need no human intervention

While this evolution has, broadly speaking, been a success, the Borg case study illustrates another way we have come to think of the problem of automation.

Borg: Birth of the Warehouse-Scale Computer

Another way to understand the development of our attitude toward automation, and when and where that automation is best deployed, is to consider the history of the development of our cluster management systems.31 Like MySQL on Borg, which demonstrated the success of converting manual operations to automatic ones, and the cluster turnup process, which demonstrated the downside of not thinking carefully enough about where and how automation was implemented, developing cluster management also ended up demonstrating another lesson about how automation should be done. Like our previous two examples, something quite sophisticated was created as the eventual result of continuous evolution from simpler beginnings.

Google’s clusters were initially deployed much like everyone else’s small networks of the time: racks of machines with specific purposes and heterogeneous configurations. Engineers would log in to some well-known “master” machine to perform administrative tasks; “golden” binaries and configuration lived on these masters. As we had only one colo provider, most naming logic implicitly assumed that location. As production grew, and we began to use multiple clusters, different domains (cluster names) entered the picture. It became necessary to have a file describing what each machine did, which grouped machines under some loose naming strategy.
This descriptor file, in combination with the equivalent of a parallel SSH, allowed us to reboot (for example) all the search machines in one go. Around this time, it was common to get tickets like “search is done with machine x1, crawl can have the machine now.” Automation development began.

Initially automation consisted of simple Python scripts for operations such as the following:

- Service management: keeping services running (e.g., restarts after segfaults)
- Tracking what services were supposed to run on which machines
- Log message parsing: SSHing into each machine and looking for regexps

Automation eventually mutated into a proper database that tracked machine state, and also incorporated more sophisticated monitoring tools. With the union set of the automation available, we could now automatically manage much of the lifecycle of machines: noticing when machines were broken, removing the services, sending them to repair, and restoring the configuration when they came back from repair.

But to take a step back, this automation was useful yet profoundly limited, due to the fact that abstractions of the system were relentlessly tied to physical machines. We needed a new approach, hence Borg [Ver15] was born: a system that moved away from the relatively static host/port/job assignments of the previous world, toward treating a collection of machines as a managed sea of resources. Central to its success—and its conception—was the notion of turning cluster management into an entity for which API calls could be issued, to some central coordinator. This liberated extra dimensions of efficiency, flexibility, and reliability: unlike the previous model of machine “ownership,” Borg could allow machines to schedule, for example, batch and user-facing tasks on the same machine.
This functionality ultimately resulted in continuous and automatic operating system upgrades with a very small amount of constant 32 effort—effort that does not scale with the total size of production deployments. Slight deviations in machine state are now automatically fixed; brokenness and lifecycle management are essentially no-ops for SRE at this point. Thousands of machines are born, die, and go into repairs daily with no SRE effort. To echo the words of Ben Treynor Sloss: by taking the approach that this was a software problem, the initial automation bought us enough time to turn cluster management into something autonomous, as opposed to automated. We achieved this goal by bringing ideas related to data distribution, APIs, hub-and-spoke architectures, and classic distributed system software development to bear upon the domain of infrastructure management. An interesting analogy is possible here: we can make a direct mapping between the single machine case and the development of cluster management abstractions. In this view, rescheduling on another machine looks a lot like a process moving from one CPU to another: of course, those compute resources happen to be at the other end of a network link, but to what extent does that actually matter? Thinking in these terms, rescheduling looks like an intrinsic feature of the system rather than something one would “automate”—humans couldn’t react fast enough anyway. Similarly in the case of cluster turnup: in this metaphor, cluster turnup is simply additional schedulable capacity, a bit like adding disk or RAM to a single computer. However, a single-node computer is not, in general, expected to continue operating when a large number of components fail. The global computer is—it must be self-repairing to operate once it grows past a certain size, due to the essentially statistically guaranteed large number of failures taking place every second. 
This implies that as we move systems up the hierarchy from manually triggered, to automatically triggered, to autonomous, some capacity for self-introspection is necessary to survive.

Reliability Is the Fundamental Feature

Of course, for effective troubleshooting, the details of internal operation that the introspection relies upon should also be exposed to the humans managing the overall system. Analogous discussions about the impact of automation in the noncomputer domain—for example, in airplane flight33 or industrial applications—often point out the downside of highly effective automation:34 human operators are progressively more relieved of useful direct contact with the system as the automation covers more and more daily activities over time. Inevitably, then, a situation arises in which the automation fails, and the humans are now unable to successfully operate the system. The fluidity of their reactions has been lost due to lack of practice, and their mental models of what the system should be doing no longer reflect the reality of what it is doing.35

This situation arises more when the system is nonautonomous—i.e., where automation replaces manual actions, and the manual actions are presumed to be always performable and available just as they were before. Sadly, over time, this ultimately becomes false: those manual actions are not always performable because the functionality to permit them no longer exists.

We, too, have experienced situations where automation has been actively harmful on a number of occasions—see Automation: Enabling Failure at Scale—but in Google’s experience, there are more systems for which automation or autonomous behavior are no longer optional extras. As you scale, this is of course the case, but there are still strong arguments for more autonomous behavior of systems irrespective of size. Reliability is the fundamental feature, and autonomous, resilient behavior is one useful way to get that.
Recommendations

You might read the examples in this chapter and decide that you need to be Google-scale before you have anything to do with automation whatsoever. This is untrue, for two reasons: automation provides more than just time saving, so it’s worth implementing in more cases than a simple time-expended versus time-saved calculation might suggest. But the approach with the highest leverage actually occurs in the design phase: shipping and iterating rapidly might allow you to implement functionality faster, yet rarely makes for a resilient system. Autonomous operation is difficult to convincingly retrofit to sufficiently large systems, but standard good practices in software engineering will help considerably: having decoupled subsystems, introducing APIs, minimizing side effects, and so on.

Automation: Enabling Failure at Scale

Google runs over a dozen of its own large datacenters, but we also depend on machines in many third-party colocation facilities (or "colos"). Our machines in these colos are used to terminate most incoming connections, or as a cache for our own Content Delivery Network, in order to lower end-user latency. At any point in time, a number of these racks are being installed or decommissioned; both of these processes are largely automated. One step during decommission involves overwriting the full content of the disk of all the machines in the rack, after which point an independent system verifies the successful erase. We call this process "Diskerase."

Once upon a time, the automation in charge of decommissioning a particular rack failed, but only after the Diskerase step had completed successfully. Later, the decommission process was restarted from the beginning, to debug the failure. On that iteration, when trying to send the set of machines in the rack to Diskerase, the automation determined that the set of machines that still needed to be Diskerased was (correctly) empty.
Unfortunately, the empty set was used as a special value, interpreted to mean "everything." This means the automation sent almost all the machines we have in all colos to Diskerase. Within minutes, the highly efficient Diskerase wiped the disks on all machines in our CDN, and the machines were no longer able to terminate connections from users (or do anything else useful). We were still able to serve all the users from our own datacenters, and after a few minutes the only effect visible externally was a slight increase in latency. As far as we could tell, very few users noticed the problem at all, thanks to good capacity planning (at least we got that right!). Meanwhile, we spent the better part of two days reinstalling the machines in the affected colo racks; then we spent the following weeks auditing and adding more sanity checks—including rate limiting—into our automation, and making our decommission workflow idempotent.

26 For readers who already feel they precisely understand the value of automation, skip ahead to The Value for Google SRE. However, note that our description contains some nuances that might be useful to keep in mind while reading the rest of the chapter.
27 The expertise acquired in building such automation is also valuable in itself; engineers both deeply understand the existing processes they have automated and can later automate novel processes more quickly.
28 See the following XKCD cartoon: https://xkcd.com/1205/.
29 See, for example, https://blog.engineyard.com/2014/pets-vs-cattle.
30 Of course, not every system that needs to be managed actually provides callable APIs for management—forcing some tooling to use, e.g., CLI invocations or automated website clicks.
31 We have compressed and simplified this history to aid understanding.
32 As in a small, unchanging number.
33 See, e.g., https://en.wikipedia.org/wiki/Air_France_Flight_447.
34 See, e.g., [Bai83] and [Sar97].
35 This is yet another good reason for regular practice drills; see Disaster Role Playing.

Copyright © 2017 Google, Inc. Published by O'Reilly Media, Inc. Licensed under CC BY-NC-ND 4.0
Hear from SRE Experts

"SREs literally have the weight of Google services and infrastructure running on their shoulders. We are the people that keep all of the Google services up and running. That has always fascinated me, and is where I want to be: I want to be the person who gets to the root of issues. But also, I like the idea of knowing that I literally have the power of the internet in my hands."
Jessica Theodat, Security Tech Lead, Site Reliability

"Engineering reliability can go in a ton of different directions. And for me, that's part of why I find this discipline so fascinating. Engineering reliability is about questions. It's about asking your developers—and asking them to ask their customers—'What does reliability mean? What about your service matters if it's up and running? What would a user consider to be broken?'. And then you focus on the work that impacts the aspects that actually matter."
Jennifer Mace, Staff Software Engineer

"SRE isn't necessarily about automating yourself out of a job. It's wondering what to automate so you can shift the complexity where you actually care to have it. For example, automating configuration deployment might allow you to offer more extensibility to users. Spend your complexity where your value is, not on the underlying supporting beams."
Pierre Palatin, Staff Software Engineer

"Early on in my SRE journey, my tech lead said something that has really stuck with me: he said that SREs are usually the people who run towards fires rather than away from them. In my experience, that has absolutely held true. It's one of the most rewarding aspects of the job—to be on the front lines of our services that billions of people use and rely upon."
Megan Yin, Software Engineer, Site Reliability

"We want to encourage a culture in which people are not afraid to take risks. If we want innovation, we need to take those risks. And if we want to take risks, we need to accept that failure will happen. And instead of focusing on the people, we need to focus on the system and processes that allowed it to happen."
Ayelet Sachto, Automation and Incident Management TL, Site Reliability

"SRE fuses the disciplines of software engineering and systems engineering with a deep understanding of how people and software interact at scale. By working across boundaries, Site Reliability Engineers drive transformational innovation in reliability and efficiency."
Joan Smith, Principal Engineer, Office of the CFO

"Fundamentally, it's what happens when you ask a software engineer to design an operations function."
Ben Treynor Sloss, Vice President, Google Engineering, Founder of Google SRE
W3Techs - About Us

W3Techs is a division of Q-Success Web-based Services. The goal is to collect information about the usage of various types of technologies used for building and running websites, and to produce and publish surveys that give insights into that subject. Our company has no affiliation with any of the technology providers, which we cover in our surveys.

Copyright © 2009-2026 Q-Success
Chapter 7 - The Evolution of Automation at Google

Written by Niall Murphy with John Looney and Michael Kacirek
Edited by Betsy Beyer

Besides black art, there is only automation and mechanization.
Federico García Lorca (1898–1936), Spanish poet and playwright

For SRE, automation is a force multiplier, not a panacea.
Of course, just multiplying force does not naturally change the accuracy of where that force is applied: doing automation thoughtlessly can create as many problems as it solves. Therefore, while we believe that software-based automation is superior to manual operation in most circumstances, better than either option is a higher-level system design requiring neither of them—an autonomous system. Or to put it another way, the value of automation comes from both what it does and its judicious application. We’ll discuss both the value of automation and how our attitude has evolved over time.

The Value of Automation

What exactly is the value of automation?26

Consistency

Although scale is an obvious motivation for automation, there are many other reasons to use it. Take the example of university computing systems, where many systems engineering folks started their careers. Systems administrators of that background were generally charged with running a collection of machines or some software, and were accustomed to manually performing various actions in the discharge of that duty. One common example is creating user accounts; others include purely operational duties like making sure backups happen, managing server failover, and small data manipulations like changing the upstream DNS servers’ resolv.conf, DNS server zone data, and similar activities. Ultimately, however, this prevalence of manual tasks is unsatisfactory for both the organizations and indeed the people maintaining systems in this way. For a start, any action performed by a human or humans hundreds of times won’t be performed the same way each time: even with the best will in the world, very few of us will ever be as consistent as a machine. This inevitable lack of consistency leads to mistakes, oversights, issues with data quality, and, yes, reliability problems. In this domain—the execution of well-scoped, known procedures—the value of consistency is in many ways the primary value of automation.
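A toy sketch makes the consistency point concrete for the account-creation example above. The function, UID policy, and state model are invented for illustration; real account automation would talk to a directory service rather than a dictionary.

```python
# A toy sketch of why automation buys consistency: the same well-scoped
# procedure runs identically every time, instead of by operator memory.
# All names and the UID policy are illustrative inventions.

def create_account(users, name, shell="/bin/bash"):
    """Idempotent account creation: running it twice changes nothing."""
    if name in users:
        return users[name]   # already exists; do not re-create
    uid = 1000 + len(users)  # a deterministic policy, not operator whim
    users[name] = {"uid": uid, "shell": shell, "home": f"/home/{name}"}
    return users[name]

users = {}
first = create_account(users, "alice")
second = create_account(users, "alice")  # repeat run: identical result
assert first == second and len(users) == 1
```

A human performing this task hundreds of times will occasionally pick the wrong shell or skip the home directory; the encoded procedure cannot, and running it again is always safe.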
A Platform

Automation doesn’t just provide consistency. Designed and done properly, automatic systems also provide a platform that can be extended, applied to more systems, or perhaps even spun out for profit.27 (The alternative, no automation, is neither cost effective nor extensible: it is instead a tax levied on the operation of a system.)

A platform also centralizes mistakes. In other words, a bug fixed in the code will be fixed there once and forever, unlike a sufficiently large set of humans performing the same procedure, as discussed previously. A platform can be extended to perform additional tasks more easily than humans can be instructed to perform them (or sometimes even realize that they have to be done). Depending on the nature of the task, it can run either continuously or much more frequently than humans could appropriately accomplish the task, or at times that are inconvenient for humans. Furthermore, a platform can export metrics about its performance, or otherwise allow you to discover details about your process you didn’t know previously, because these details are more easily measurable within the context of a platform.

Faster Repairs

There’s an additional benefit for systems where automation is used to resolve common faults in a system (a frequent situation for SRE-created automation). If automation runs regularly and successfully enough, the result is a reduced mean time to repair (MTTR) for those common faults. You can then spend your time on other tasks instead, thereby achieving increased developer velocity because you don’t have to spend time either preventing a problem or (more commonly) cleaning up after it.

As is well understood in the industry, the later in the product lifecycle a problem is discovered, the more expensive it is to fix; see Testing for Reliability.
Generally, problems that occur in actual production are most expensive to fix, both in terms of time and money, which means that an automated system looking for problems as soon as they arise has a good chance of lowering the total cost of the system, given that the system is sufficiently large.

Faster Action

In the infrastructural situations where SRE automation tends to be deployed, humans don’t usually react as fast as machines. In most common cases, where, for example, failover or traffic switching can be well defined for a particular application, it makes no sense to effectively require a human to intermittently press a button called “Allow system to continue to run.” (Yes, it is true that sometimes automatic procedures can end up making a bad situation worse, but that is why such procedures should be scoped over well-defined domains.) Google has a large amount of automation; in many cases, the services we support could not long survive without this automation because they crossed the threshold of manageable manual operation long ago.

Time Saving

Finally, time saving is an oft-quoted rationale for automation. Although people cite this rationale for automation more than the others, in many ways the benefit is often less immediately calculable. Engineers often waver over whether a particular piece of automation or code is worth writing, in terms of effort saved in not requiring a task to be performed manually versus the effort required to write it.28 It’s easy to overlook the fact that once you have encapsulated some task in automation, anyone can execute the task. Therefore, the time savings apply across anyone who would plausibly use the automation. Decoupling operator from operation is very powerful.

Joseph Bironas, an SRE who led Google’s datacenter turnup efforts for a time, forcefully argued: "If we are engineering processes and solutions that are not automatable, we continue having to staff humans to maintain the system.
If we have to staff humans to do the work, we are feeding the machines with the blood, sweat, and tears of human beings. Think The Matrix with less special effects and more pissed off System Administrators."

The Value for Google SRE

All of these benefits and trade-offs apply to us just as much as anyone else, and Google does have a strong bias toward automation. Part of our preference for automation springs from our particular business challenges: the products and services we look after are planet-spanning in scale, and we don’t typically have time to engage in the same kind of machine or service hand-holding common in other organizations. 29 For truly large services, the factors of consistency, quickness, and reliability dominate most conversations about the trade-offs of performing automation.

Another argument in favor of automation, particularly in the case of Google, is our complicated yet surprisingly uniform production environment, described in The Production Environment at Google, from the Viewpoint of an SRE. While other organizations might have an important piece of equipment without a readily accessible API, software for which no source code is available, or another impediment to complete control over production operations, Google generally avoids such scenarios. We have built APIs for systems when no API was available from the vendor. Even though purchasing software for a particular task would have been much cheaper in the short term, we chose to write our own solutions, because doing so produced APIs with the potential for much greater long-term benefits. We spent a lot of time overcoming obstacles to automatic system management, and then resolutely developed that automatic system management itself.

Given how Google manages its source code [Pot16], the availability of that code for more or less any system that SRE touches also means that our mission to “own the product in production” is much easier because we control the entirety of the stack.
Of course, although Google is ideologically bent upon using machines to manage machines where possible, reality requires some modification of our approach. It isn’t appropriate to automate every component of every system, and not everyone has the ability or inclination to develop automation at a particular time. Some essential systems started out as quick prototypes, not designed to last or to interface with automation. The previous paragraphs state a maximalist view of our position, but one that we have been broadly successful at putting into action within the Google context. In general, we have chosen to create platforms where we could, or to position ourselves so that we could create platforms over time. We view this platform-based approach as necessary for manageability and scalability.

The Use Cases for Automation

In the industry, automation is the term generally used for writing code to solve a wide variety of problems, although the motivations for writing this code, and the solutions themselves, are often quite different. More broadly, in this view, automation is “meta-software”—software to act on software. As we implied earlier, there are a number of use cases for automation. Here is a non-exhaustive list of examples:

- User account creation
- Cluster turnup and turndown for services
- Software or hardware installation preparation and decommissioning
- Rollouts of new software versions
- Runtime configuration changes
- A special case of runtime config changes: changes to your dependencies

This list could continue essentially ad infinitum.

Google SRE’s Use Cases for Automation

In Google, we have all of the use cases just listed, and more. However, within Google SRE, our primary affinity has typically been for running infrastructure, as opposed to managing the quality of the data that passes over that infrastructure.
This line isn’t totally clear—for example, we care deeply if half of a dataset vanishes after a push, and therefore we alert on coarse-grain differences like this, but it’s rare for us to write the equivalent of changing the properties of some arbitrary subset of accounts on a system. Therefore, the context for our automation is often automation to manage the lifecycle of systems, not their data: for example, deployments of a service in a new cluster. To this extent, SRE’s automation efforts are not far off what many other people and organizations do, except that we use different tools to manage it and have a different focus (as we’ll discuss).

Widely available tools like Puppet, Chef, cfengine, and even Perl, which all provide ways to automate particular tasks, differ mostly in terms of the level of abstraction of the components provided to help the act of automating. A full language like Perl provides POSIX-level affordances, which in theory provide an essentially unlimited scope of automation across the APIs accessible to the system, 30 whereas Chef and Puppet provide out-of-the-box abstractions with which services or other higher-level entities can be manipulated.

The trade-off here is classic: higher-level abstractions are easier to manage and reason about, but when you encounter a “leaky abstraction,” you fail systemically, repeatedly, and potentially inconsistently. For example, we often assume that pushing a new binary to a cluster is atomic; the cluster will either end up with the old version, or the new version. However, real-world behavior is more complicated: that cluster’s network can fail halfway through; machines can fail; communication to the cluster management layer can fail, leaving the system in an inconsistent state; depending on the situation, new binaries could be staged but not pushed, or pushed but not restarted, or restarted but not verifiable.
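To make those intermediate outcomes concrete, the following sketch enumerates the states a supposedly atomic push can actually land in. The state names and the `classify` helper are hypothetical, purely for illustration:

```python
from enum import Enum

class PushState(Enum):
    """Hypothetical intermediate states of a 'push new binary' operation."""
    STAGED = "staged but not pushed"
    PUSHED = "pushed but not restarted"
    RESTARTED = "restarted but not verified"
    VERIFIED = "verified"

def classify(staged, pushed, restarted, verified):
    """Map per-cluster observations onto the states listed in the text.

    An abstraction that models a push as atomic only has room for
    VERIFIED (or nothing at all); the other three states are exactly
    where the abstraction leaks.
    """
    if verified:
        return PushState.VERIFIED
    if restarted:
        return PushState.RESTARTED
    if pushed:
        return PushState.PUSHED
    if staged:
        return PushState.STAGED
    return None
```

Automation that only models "old version" and "new version" has no slot for the three intermediate states, which is where it fails inconsistently.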
Very few abstractions model these kinds of outcomes successfully, and most generally end up halting themselves and calling for intervention. Truly bad automation systems don’t even do that.

SRE has a number of philosophies and products in the domain of automation, some of which look more like generic rollout tools without particularly detailed modeling of higher-level entities, and some of which look more like languages for describing service deployment (and so on) at a very abstract level. Work done in the latter tends to be more reusable and be more of a common platform than the former, but the complexity of our production environment sometimes means that the former approach is the most immediately tractable option.

A Hierarchy of Automation Classes

Although all of these automation steps are valuable, and indeed an automation platform is valuable in and of itself, in an ideal world, we wouldn’t need externalized automation. In fact, instead of having a system that has to have external glue logic, it would be even better to have a system that needs no glue logic at all, not just because internalization is more efficient (although such efficiency is useful), but because it has been designed to not need glue logic in the first place. Accomplishing that involves taking the use cases for glue logic—generally “first order” manipulations of a system, such as adding accounts or performing system turnup—and finding a way to handle those use cases directly within the application.

As a more detailed example, most turnup automation at Google is problematic because it ends up being maintained separately from the core system and therefore suffers from “bit rot,” i.e., not changing when the underlying systems change. Despite the best of intentions, attempting to more tightly couple the two (turnup automation and the core system) often fails due to unaligned priorities, as product developers will, not unreasonably, resist a test deployment requirement for every change.
Secondly, automation that is crucial but only executed at infrequent intervals and therefore difficult to test is often particularly fragile because of the extended feedback cycle. Cluster failover is one classic example of infrequently executed automation: failovers might only occur every few months, or infrequently enough that inconsistencies between instances are introduced.

The evolution of automation follows a path:

1) No automation: the database master is failed over manually between locations.
2) Externally maintained system-specific automation: an SRE has a failover script in his or her home directory.
3) Externally maintained generic automation: the SRE adds database support to a "generic failover" script that everyone uses.
4) Internally maintained system-specific automation: the database ships with its own failover script.
5) Systems that don’t need any automation: the database notices problems, and automatically fails over without human intervention.

SRE hates manual operations, so we obviously try to create systems that don’t require them. However, sometimes manual operations are unavoidable.

There is additionally a subvariety of automation that applies changes not across the domain of specific system-related configuration, but across the domain of production as a whole. In a highly centralized proprietary production environment like Google’s, there are a large number of changes that have a non–service-specific scope—e.g., changing upstream Chubby servers, a flag change to the Bigtable client library to make access more reliable, and so on—which nonetheless need to be safely managed and rolled back if necessary. Beyond a certain volume of changes, it is infeasible for production-wide changes to be accomplished manually, and at some time before that point, it’s a waste to have manual oversight for a process where a large proportion of the changes are either trivial or accomplished successfully by basic relaunch-and-check strategies.
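A basic relaunch-and-check strategy of the kind just mentioned can be sketched as a loop that applies a change, verifies health, and rolls everything back on the first failure. All function and cluster names below are hypothetical, for illustration only:

```python
def rollout(targets, apply_change, healthy, rollback):
    """Apply a production-wide change one target at a time.

    Verify health after each step; on the first unhealthy target,
    roll back everything touched so far and report where we stopped.
    """
    done = []
    for target in targets:
        apply_change(target)
        if not healthy(target):
            for t in reversed(done + [target]):
                rollback(t)
            return ("rolled_back", target)
        done.append(target)
    return ("ok", None)

# Demo with in-memory fakes: cluster "c2" fails its health check.
state = {}
def apply_change(t): state[t] = "new"
def healthy(t): return t != "c2"
def rollback(t): state[t] = "old"

outcome = rollout(["c1", "c2", "c3"], apply_change, healthy, rollback)
```

The point of the check step is that trivial changes sail through without human oversight, while the rare bad one is contained automatically.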
Let’s use internal case studies to illustrate some of the preceding points in detail. The first case study is about how, due to some diligent, far-sighted work, we managed to achieve the self-professed nirvana of SRE: to automate ourselves out of a job.

Automate Yourself Out of a Job: Automate ALL the Things!

For a long while, the Ads products at Google stored their data in a MySQL database. Because Ads data obviously has high reliability requirements, an SRE team was charged with looking after that infrastructure. From 2005 to 2008, the Ads Database mostly ran in what we considered to be a mature and managed state. For example, we had automated away the worst, but not all, of the routine work for standard replica replacements. We believed the Ads Database was well managed and that we had harvested most of the low-hanging fruit in terms of optimization and scale.

However, as daily operations became comfortable, team members began to look at the next level of system development: migrating MySQL onto Google’s cluster scheduling system, Borg. We hoped this migration would provide two main benefits:

- Completely eliminate machine/replica maintenance: Borg would automatically handle the setup/restart of new and broken tasks.
- Enable bin-packing of multiple MySQL instances on the same physical machine: Borg would enable more efficient use of machine resources via Containers.

In late 2008, we successfully deployed a proof of concept MySQL instance on Borg. Unfortunately, this was accompanied by a significant new difficulty. A core operating characteristic of Borg is that its tasks move around automatically. Tasks commonly move within Borg as frequently as once or twice per week. This frequency was tolerable for our database replicas, but unacceptable for our masters. At that time, the process for master failover took 30–90 minutes per instance.
Simply because we ran on shared machines and were subject to reboots for kernel upgrades, in addition to the normal rate of machine failure, we had to expect a number of otherwise unrelated failovers every week. This factor, in combination with the number of shards on which our system was hosted, meant that:

- Manual failovers would consume a substantial amount of human hours and would give us best-case availability of 99% uptime, which fell short of the actual business requirements of the product.
- In order to meet our error budgets, each failover would have to take less than 30 seconds of downtime.
- There was no way to optimize a human-dependent procedure to make downtime shorter than 30 seconds.

Therefore, our only choice was to automate failover. Actually, we needed to automate more than just failover.

In 2009 Ads SRE completed our automated failover daemon, which we dubbed “Decider.” Decider could complete MySQL failovers for both planned and unplanned failovers in less than 30 seconds 95% of the time. With the creation of Decider, MySQL on Borg (MoB) finally became a reality. We graduated from optimizing our infrastructure for a lack of failover to embracing the idea that failure is inevitable, and therefore optimizing to recover quickly through automation.

While automation let us achieve highly available MySQL in a world that forced up to two restarts per week, it did come with its own set of costs. All of our applications had to be changed to include significantly more failure-handling logic than before. Given that the norm in the MySQL development world is to assume that the MySQL instance will be the most stable component in the stack, this switch meant customizing software like JDBC to be more tolerant of our failure-prone environment. However, the benefits of migrating to MoB with Decider were well worth these costs. Once on MoB, the time our team spent on mundane operational tasks dropped by 95%.
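Decider itself is internal to Google, but the general shape of an automated failover under a 30-second downtime budget can be sketched as follows. All class and field names here are invented for illustration; this is not Decider's actual design:

```python
import time

DOWNTIME_BUDGET_S = 30  # the error-budget-derived target from the text

class Node:
    """Minimal stand-in for a MySQL task on Borg; illustrative only."""
    def __init__(self, name, replication_position=0):
        self.name = name
        self.replication_position = replication_position
        self.role = "replica"
        self.upstream = None

def failover(replicas, now=time.monotonic):
    """Promote the most caught-up replica and repoint the others to it.

    Fails loudly if the procedure exceeds the downtime budget, instead
    of silently delivering worse availability than promised.
    """
    start = now()
    candidate = max(replicas, key=lambda r: r.replication_position)
    candidate.role = "master"
    candidate.upstream = None
    for r in replicas:
        if r is not candidate:
            r.upstream = candidate
    if now() - start > DOWNTIME_BUDGET_S:
        raise RuntimeError("failover exceeded downtime budget")
    return candidate
```

The budget check reflects the reasoning above: once a human cannot possibly act within 30 seconds, the procedure must be machine-driven end to end.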
Our failovers were automated, so an outage of a single database task no longer paged a human. The main upshot of this new automation was that we had a lot more free time to spend on improving other parts of the infrastructure. Such improvements had a cascading effect: the more time we saved, the more time we were able to spend on optimizing and automating other tedious work. Eventually, we were able to automate schema changes, causing the cost of total operational maintenance of the Ads Database to drop by nearly 95%. Some might say that we had successfully automated ourselves out of this job.

The hardware side of our domain also saw improvement. Migrating to MoB freed up considerable resources because we could schedule multiple MySQL instances on the same machines, which improved utilization of our hardware. In total, we were able to free up about 60% of our hardware. Our team was now flush with hardware and engineering resources.

This example demonstrates the wisdom of going the extra mile to deliver a platform rather than replacing existing manual procedures. The next example comes from the cluster infrastructure group, and illustrates some of the more difficult trade-offs you might encounter on your way to automating all the things.

Soothing the Pain: Applying Automation to Cluster Turnups

Ten years ago, the Cluster Infrastructure SRE team seemed to get a new hire every few months. As it turned out, that was approximately the same frequency at which we turned up a new cluster. Because turning up a service in a new cluster gives new hires exposure to a service’s internals, this task seemed like a natural and useful training tool.

The steps taken to get a cluster ready for use were something like the following:

1) Fit out a datacenter building for power and cooling.
2) Install and configure core switches and connections to the backbone.
3) Install a few initial racks of servers.
4) Configure basic services such as DNS and installers, then configure a lock service, storage, and computing.
5) Deploy the remaining racks of machines.
6) Assign user-facing services resources, so their teams can set up the services.

Steps 4 and 6 were extremely complex. While basic services like DNS are relatively simple, the storage and compute subsystems at that time were still in heavy development, so new flags, components, and optimizations were added weekly. Some services had more than a hundred different component subsystems, each with a complex web of dependencies. Failing to configure one subsystem, or configuring a system or component differently than other deployments, is a customer-impacting outage waiting to happen.

In one case, a multi-petabyte Bigtable cluster was configured not to use the first (logging) disk on 12-disk systems, for latency reasons. A year later, some automation assumed that if a machine’s first disk wasn’t being used, that machine didn’t have any storage configured; therefore, it was safe to wipe the machine and set it up from scratch. All of the Bigtable data was wiped, instantly. Thankfully we had multiple real-time replicas of the dataset, but such surprises are unwelcome. Automation needs to be careful about relying on implicit "safety" signals.

Early automation focused on accelerating cluster delivery. This approach tended to rely upon creative use of SSH for tedious package distribution and service initialization problems. This strategy was an initial win, but those free-form scripts became a cholesterol of technical debt.

Detecting Inconsistencies with Prodtest

As the number of clusters grew, some clusters required hand-tuned flags and settings. As a result, teams wasted more and more time chasing down difficult-to-spot misconfigurations. If a flag that made GFS more responsive to log processing leaked into the default templates, cells with many files could run out of memory under load.
Infuriating and time-consuming misconfigurations crept in with nearly every large configuration change. The creative—though brittle—shell scripts we used to configure clusters were scaling neither to the number of people who wanted to make changes nor to the sheer number of cluster permutations that needed to be built. These shell scripts also failed to resolve more significant concerns before declaring that a service was good to take customer-facing traffic, such as:

- Were all of the service’s dependencies available and correctly configured?
- Were all configurations and packages consistent with other deployments?
- Could the team confirm that every configuration exception was desired?

Prodtest (Production Test) was an ingenious solution to these unwelcome surprises. We extended the Python unit test framework to allow for unit testing of real-world services. These unit tests have dependencies, allowing a chain of tests, and a failure in one test would quickly abort the rest of the chain. Take the test shown in Figure 7-1 as an example.

Figure 7-1. ProdTest for DNS Service, showing how one failed test aborts the subsequent chain of tests

A given team’s Prodtest was given the cluster name, and it could validate that team’s services in that cluster. Later additions allowed us to generate a graph of the unit tests and their states. This functionality allowed an engineer to see quickly whether their service was correctly configured in all clusters, and if not, why. The graph highlighted the failed step, and the failing Python unit test output a more verbose error message.

Any time a team encountered a delay due to another team’s unexpected misconfiguration, a bug could be filed to extend that team’s Prodtest. This ensured that a similar problem would be discovered earlier in the future. SREs were proud to be able to assure their customers that all services—both newly turned up services and existing services with new configuration—would reliably serve production traffic.
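The chained behavior shown in Figure 7-1, where one failed test aborts everything downstream, can be sketched with a small dependency-aware runner. The real Prodtest extended the Python unit test framework; the simplified runner and DNS-flavored names below are illustrative only:

```python
def run_chain(tests):
    """Run (name, depends_on, check) triples in order.

    A check raises AssertionError on failure; anything depending on a
    failed (or skipped) test is skipped, so one failure aborts its
    entire downstream chain.
    """
    status = {}
    for name, deps, check in tests:
        if any(status.get(d) != "pass" for d in deps):
            status[name] = "skipped"
            continue
        try:
            check()
            status[name] = "pass"
        except AssertionError:
            status[name] = "fail"
    return status

# Illustrative chain; the middle check simulates a misconfigured cluster.
def check_cluster_in_machine_db():
    pass  # would query the machine database

def check_dns_server_resolves():
    raise AssertionError("no A record for the cluster's nameserver")

def check_dns_monitoring_config():
    pass  # never reached: its dependency failed

chain = [
    ("cluster_in_machine_db", [], check_cluster_in_machine_db),
    ("dns_server_resolves", ["cluster_in_machine_db"], check_dns_server_resolves),
    ("dns_monitoring_config", ["dns_server_resolves"], check_dns_monitoring_config),
]
result = run_chain(chain)
```

Because every test records a pass/fail/skipped state, the results can be rendered as the graph of tests and states described above.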
For the first time, our project managers could predict when a cluster could "go live," and had a complete understanding of why each cluster took six or more weeks to go from "network-ready" to "serving live traffic." Out of the blue, SRE received a mission from senior management: In three months, five new clusters will reach network-ready on the same day. Please turn them up in one week.

Resolving Inconsistencies Idempotently

A "One Week Turnup" was a terrifying mission. We had tens of thousands of lines of shell script owned by dozens of teams. We could quickly tell how unprepared any given cluster was, but fixing it meant that the dozens of teams would have to file hundreds of bugs, and then we had to hope that these bugs would be promptly fixed.

We realized that evolving from "Python unit tests finding misconfigurations" to "Python code fixing misconfigurations" could enable us to fix these issues faster. The unit test already knew which cluster we were examining and the specific test that was failing, so we paired each test with a fix. If each fix was written to be idempotent, and could assume that all dependencies were met, the problem should have been easy—and safe—to resolve. Requiring idempotent fixes meant teams could run their "fix script" every 15 minutes without fearing damage to the cluster’s configuration. If the DNS team’s test was blocked on the Machine Database team’s configuration of a new cluster, as soon as the cluster appeared in the database, the DNS team’s tests and fixes would start working.

Take the test shown in Figure 7-2 as an example. If TestDnsMonitoringConfigExists fails, as shown, we can call FixDnsMonitoringCreateConfig, which scrapes configuration from a database, then checks a skeleton configuration file into our revision control system. Then TestDnsMonitoringConfigExists passes on retry, and the TestDnsMonitoringConfigPushed test can be attempted. If that test fails, the FixDnsMonitoringPushConfig step runs.
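The test/fix pairing can be sketched as a small resolve loop. The `repo` dict below stands in for the revision control system, and the function names echo (but are not) the TestDnsMonitoringConfigExists and FixDnsMonitoringCreateConfig pair from Figure 7-2:

```python
def resolve(test, fix, max_attempts=3):
    """Run the test; on failure, run the idempotent fix and retest.

    If the fix doesn't converge within max_attempts, give up and return
    False so the automation can stop and notify a human.
    """
    for _ in range(max_attempts):
        if test():
            return True
        fix()
    return test()

repo = {}  # stand-in for the revision control system

def test_config_exists():
    return "dns_monitoring.cfg" in repo

def fix_create_config():
    # Idempotent: running this any number of times yields the same
    # state, which is what makes a 15-minute fix loop safe.
    repo["dns_monitoring.cfg"] = "skeleton config scraped from database"

ok = resolve(test_config_exists, fix_create_config)
```

Idempotence is the load-bearing property: without it, rerunning a fix on an already-healthy cluster could itself damage the configuration.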
If a fix fails multiple times, the automation assumes that the fix failed and stops, notifying the user. Armed with these scripts, a small group of engineers could ensure that we could go from "The network works, and machines are listed in the database" to "Serving 1% of websearch and ads traffic" in a matter of a week or two. At the time, this seemed to be the apex of automation technology.

Looking back, this approach was deeply flawed; the latency between the test, the fix, and then a second test introduced flaky tests that sometimes worked and sometimes failed. Not all fixes were naturally idempotent, so a flaky test that was followed by a fix might render the system in an inconsistent state.

Figure 7-2. ProdTest for DNS Service, showing that one failed test resulted in only running one fix

The Inclination to Specialize

Automation processes can vary in three respects:

- Competence, i.e., their accuracy
- Latency, how quickly all steps are executed when initiated
- Relevance, or the proportion of the real-world process covered by automation

We began with a process that was highly competent (maintained and run by the service owners), high-latency (the service owners performed the process in their spare time or assigned it to new engineers), and very relevant (the service owners knew when the real world changed, and could fix the automation).

To reduce turnup latency, many service-owning teams instructed a single "turnup team" what automation to run. The turnup team used tickets to start each stage in the turnup so that we could track the remaining tasks, and who those tasks were assigned to. If the human interactions regarding automation modules occurred between people in the same room, cluster turnups could happen in a much shorter time. Finally, we had our competent, accurate, and timely automation process!

But this state didn’t last long. The real world is chaotic: software, configuration, data, etc.
changed, resulting in over a thousand separate changes a day to affected systems. The people most affected by automation bugs were no longer domain experts, so the automation became less relevant (meaning that new steps were missed) and less competent (new flags might have caused automation to fail). However, it took a while for this drop in quality to impact velocity.

Automation code, like unit test code, dies when the maintaining team isn’t obsessive about keeping the code in sync with the codebase it covers. The world changes around the code: the DNS team adds new configuration options, the storage team changes their package names, and the networking team needs to support new devices.

By relieving teams who ran services of the responsibility to maintain and run their automation code, we created ugly organizational incentives:

- A team whose primary task is to speed up the current turnup has no incentive to reduce the technical debt of the service-owning team running the service in production later.
- A team not running automation has no incentive to build systems that are easy to automate.
- A product manager whose schedule is not affected by low-quality automation will always prioritize new features over simplicity and automation.

The most functional tools are usually written by those who use them. A similar argument applies to why product development teams benefit from keeping at least some operational awareness of their systems in production.

Turnups were again high-latency, inaccurate, and incompetent—the worst of all worlds. However, an unrelated security mandate allowed us out of this trap. Much of distributed automation relied at that time on SSH. This is clumsy from a security perspective, because people must have root on many machines to run most commands. A growing awareness of advanced, persistent security threats drove us to reduce the privileges SREs enjoyed to the absolute minimum they needed to do their jobs.
We had to replace our use of sshd with an authenticated, ACL-driven, RPC-based Local Admin Daemon, also known as Admin Servers, which had permissions to perform those local changes. As a result, no one could install or modify a server without an audit trail. Changes to the Local Admin Daemon and the Package Repo were gated on code reviews, making it very difficult for someone to exceed their authority; giving someone the access to install packages would not let them view colocated logs. The Admin Server logged the RPC requestor, any parameters, and the results of all RPCs to enhance debugging and security audits.

Service-Oriented Cluster-Turnup

In the next iteration, Admin Servers became part of service teams’ workflows, both as related to the machine-specific Admin Servers (for installing packages and rebooting) and cluster-level Admin Servers (for actions like draining or turning up a service). SREs moved from writing shell scripts in their home directories to building peer-reviewed RPC servers with fine-grained ACLs.

Later on, after the realization that turnup processes had to be owned by the teams that owned the services fully sank in, we saw this as a way to approach cluster turnup as a Service-Oriented Architecture (SOA) problem: service owners would be responsible for creating an Admin Server to handle cluster turnup/turndown RPCs, sent by the system that knew when clusters were ready. In turn, each team would provide the contract (API) that the turnup automation needed, while still being free to change the underlying implementation. As a cluster reached "network-ready," automation sent an RPC to each Admin Server that played a part in turning up the cluster. We now have a low-latency, competent, and accurate process; most importantly, this process has stayed strong as the rate of change, the number of teams, and the number of services seem to double each year.
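A minimal sketch of the Admin Server's core behavior, an ACL check plus an audit log of requestor, parameters, and result, might look like the following. All names are hypothetical; the real Admin Servers are internal RPC services, not Python classes:

```python
import time

class AdminServer:
    """Sketch of an ACL-checked, audit-logged admin RPC endpoint."""

    def __init__(self, acls):
        self.acls = acls          # method name -> set of allowed principals
        self.audit_log = []       # every RPC is recorded, allowed or not
        self.installed = set()

    def handle(self, requestor, method, **params):
        if requestor in self.acls.get(method, set()):
            result = getattr(self, "_" + method)(**params)
        else:
            result = "PERMISSION_DENIED"
        # Log requestor, parameters, and result, as the text describes.
        self.audit_log.append((time.time(), requestor, method, params, result))
        return result

    def _install_package(self, package):
        self.installed.add(package)
        return "OK"

server = AdminServer(acls={"install_package": {"sre-alice"}})
```

Note that denied requests are logged too: the audit trail is about who asked for what, not only about what succeeded.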
As mentioned earlier, our evolution of turnup automation followed a path:

1) Operator-triggered manual action (no automation)
2) Operator-written, system-specific automation
3) Externally maintained generic automation
4) Internally maintained, system-specific automation
5) Autonomous systems that need no human intervention

While this evolution has, broadly speaking, been a success, the Borg case study illustrates another way we have come to think of the problem of automation.

Borg: Birth of the Warehouse-Scale Computer

Another way to understand the development of our attitude toward automation, and when and where that automation is best deployed, is to consider the history of the development of our cluster management systems. 31 Like MySQL on Borg, which demonstrated the success of converting manual operations to automatic ones, and the cluster turnup process, which demonstrated the downside of not thinking carefully enough about where and how automation was implemented, developing cluster management also ended up demonstrating another lesson about how automation should be done. Like our previous two examples, something quite sophisticated was created as the eventual result of continuous evolution from simpler beginnings.

Google’s clusters were initially deployed much like everyone else’s small networks of the time: racks of machines with specific purposes and heterogeneous configurations. Engineers would log in to some well-known “master” machine to perform administrative tasks; “golden” binaries and configuration lived on these masters. As we had only one colo provider, most naming logic implicitly assumed that location. As production grew, and we began to use multiple clusters, different domains (cluster names) entered the picture. It became necessary to have a file describing what each machine did, which grouped machines under some loose naming strategy.
This descriptor file, in combination with the equivalent of a parallel SSH, allowed us to reboot (for example) all the search machines in one go. Around this time, it was common to get tickets like “search is done with machine x1, crawl can have the machine now.” Automation development began. Initially automation consisted of simple Python scripts for operations such as the following:

- Service management: keeping services running (e.g., restarts after segfaults)
- Tracking what services were supposed to run on which machines
- Log message parsing: SSHing into each machine and looking for regexps

Automation eventually mutated into a proper database that tracked machine state, and also incorporated more sophisticated monitoring tools. With the union set of the automation available, we could now automatically manage much of the lifecycle of machines: noticing when machines were broken, removing the services, sending them to repair, and restoring the configuration when they came back from repair.

But to take a step back, this automation was useful yet profoundly limited, because the abstractions of the system were relentlessly tied to physical machines. We needed a new approach, hence Borg [Ver15] was born: a system that moved away from the relatively static host/port/job assignments of the previous world, toward treating a collection of machines as a managed sea of resources. Central to its success—and its conception—was the notion of turning cluster management into an entity for which API calls could be issued, to some central coordinator. This liberated extra dimensions of efficiency, flexibility, and reliability: unlike the previous model of machine “ownership,” Borg could allow machines to schedule, for example, batch and user-facing tasks on the same machine.
This functionality ultimately resulted in continuous and automatic operating system upgrades with a very small amount of constant 32 effort—effort that does not scale with the total size of production deployments. Slight deviations in machine state are now automatically fixed; brokenness and lifecycle management are essentially no-ops for SRE at this point. Thousands of machines are born, die, and go into repairs daily with no SRE effort. To echo the words of Ben Treynor Sloss: by taking the approach that this was a software problem, the initial automation bought us enough time to turn cluster management into something autonomous, as opposed to automated. We achieved this goal by bringing ideas related to data distribution, APIs, hub-and-spoke architectures, and classic distributed system software development to bear upon the domain of infrastructure management.

An interesting analogy is possible here: we can make a direct mapping between the single-machine case and the development of cluster management abstractions. In this view, rescheduling on another machine looks a lot like a process moving from one CPU to another: of course, those compute resources happen to be at the other end of a network link, but to what extent does that actually matter? Thinking in these terms, rescheduling looks like an intrinsic feature of the system rather than something one would “automate”—humans couldn’t react fast enough anyway. Similarly in the case of cluster turnup: in this metaphor, cluster turnup is simply additional schedulable capacity, a bit like adding disk or RAM to a single computer. However, a single-node computer is not, in general, expected to continue operating when a large number of components fail. The global computer is—it must be self-repairing to operate once it grows past a certain size, due to the essentially statistically guaranteed large number of failures taking place every second.
This implies that as we move systems up the hierarchy from manually triggered, to automatically triggered, to autonomous, some capacity for self-introspection is necessary to survive.

Reliability Is the Fundamental Feature

Of course, for effective troubleshooting, the details of internal operation that the introspection relies upon should also be exposed to the humans managing the overall system. Analogous discussions about the impact of automation in the noncomputer domain—for example, in airplane flight[33] or industrial applications—often point out the downside of highly effective automation:[34] human operators are progressively more relieved of useful direct contact with the system as the automation covers more and more daily activities over time. Inevitably, then, a situation arises in which the automation fails, and the humans are now unable to successfully operate the system. The fluidity of their reactions has been lost due to lack of practice, and their mental models of what the system should be doing no longer reflect the reality of what it is doing.[35]

This situation arises more when the system is nonautonomous—i.e., where automation replaces manual actions, and the manual actions are presumed to be always performable and available just as they were before. Sadly, over time, this ultimately becomes false: those manual actions are not always performable because the functionality to permit them no longer exists.

We, too, have experienced situations where automation has been actively harmful on a number of occasions—see Automation: Enabling Failure at Scale—but in Google’s experience, there are more systems for which automation or autonomous behavior are no longer optional extras. As you scale, this is of course the case, but there are still strong arguments for more autonomous behavior of systems irrespective of size. Reliability is the fundamental feature, and autonomous, resilient behavior is one useful way to get that.
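The self-repairing behavior described above is commonly realized as a reconciliation loop: compare the desired state of every machine with its observed state and emit repair actions for the differences. A minimal sketch, with invented state names and actions rather than Borg's real ones:

```python
def reconcile(desired, observed):
    """Map (desired, observed) machine states to repair actions.
    State names and action names are illustrative assumptions."""
    actions = []
    for machine, want in desired.items():
        have = observed.get(machine, "missing")
        if have == want:
            continue                      # healthy: nothing to do
        if have in ("missing", "broken"):
            actions.append(("send_to_repair", machine))
        else:
            actions.append(("restore_config", machine))
    return actions
```

Run continuously, a loop like this is what makes brokenness a no-op for humans: deviations are detected and corrected without a ticket ever being filed.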
Recommendations

You might read the examples in this chapter and decide that you need to be Google-scale before you have anything to do with automation whatsoever. This is untrue, for two reasons. First, automation provides more than just time saving, so it’s worth implementing in more cases than a simple time-expended-versus-time-saved calculation might suggest. Second, the approach with the highest leverage actually occurs in the design phase: shipping and iterating rapidly might allow you to implement functionality faster, yet rarely makes for a resilient system. Autonomous operation is difficult to convincingly retrofit to sufficiently large systems, but standard good practices in software engineering help considerably: having decoupled subsystems, introducing APIs, minimizing side effects, and so on.

Automation: Enabling Failure at Scale

Google runs over a dozen of its own large datacenters, but we also depend on machines in many third-party colocation facilities (or "colos"). Our machines in these colos are used to terminate most incoming connections, or as a cache for our own Content Delivery Network, in order to lower end-user latency. At any point in time, a number of these racks are being installed or decommissioned; both of these processes are largely automated. One step during decommissioning involves overwriting the full content of the disk of all the machines in the rack, after which point an independent system verifies the successful erase. We call this process "Diskerase."

Once upon a time, the automation in charge of decommissioning a particular rack failed, but only after the Diskerase step had completed successfully. Later, the decommission process was restarted from the beginning, to debug the failure. On that iteration, when trying to send the set of machines in the rack to Diskerase, the automation determined that the set of machines that still needed to be Diskerased was (correctly) empty.
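A destructive batch step like this is exactly where defensive semantics pay off: an empty target set should be an idempotent no-op, never a special value, and a sanity check should refuse implausibly large selections. A sketch under those assumptions (a hypothetical API, not the real Diskerase interface; the 5% rate limit is invented):

```python
def diskerase(targets, fleet_size, max_fraction=0.05):
    """Guarded sketch of a destructive batch operation.
    Function name and the rate-limit threshold are illustrative."""
    targets = set(targets)
    if not targets:
        return []        # nothing left to erase: done (idempotent no-op)
    if len(targets) > max_fraction * fleet_size:
        raise RuntimeError(
            f"refusing to erase {len(targets)} of {fleet_size} machines")
    return sorted(targets)   # in a real system: issue the erase requests
```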
Unfortunately, the empty set was used as a special value, interpreted to mean "everything." This meant the automation sent almost all the machines we have in all colos to Diskerase. Within minutes, the highly efficient Diskerase wiped the disks on all machines in our CDN, and the machines were no longer able to terminate connections from users (or do anything else useful).

We were still able to serve all the users from our own datacenters, and after a few minutes the only externally visible effect was a slight increase in latency. As far as we could tell, very few users noticed the problem at all, thanks to good capacity planning (at least we got that right!). Meanwhile, we spent the better part of two days reinstalling the machines in the affected colo racks; then we spent the following weeks auditing and adding more sanity checks—including rate limiting—to our automation, and making our decommission workflow idempotent.

[26] For readers who already feel they precisely understand the value of automation, skip ahead to The Value for Google SRE. However, note that our description contains some nuances that might be useful to keep in mind while reading the rest of the chapter.
[27] The expertise acquired in building such automation is also valuable in itself; engineers both deeply understand the existing processes they have automated and can later automate novel processes more quickly.
[28] See the following XKCD cartoon: https://xkcd.com/1205/.
[29] See, for example, https://blog.engineyard.com/2014/pets-vs-cattle.
[30] Of course, not every system that needs to be managed actually provides callable APIs for management—forcing some tooling to use, e.g., CLI invocations or automated website clicks.
[31] We have compressed and simplified this history to aid understanding.
[32] As in a small, unchanging number.
[33] See, e.g., https://en.wikipedia.org/wiki/Air_France_Flight_447.
[34] See, e.g., [Bai83] and [Sar97].
[35] This is yet another good reason for regular practice drills; see Disaster Role Playing.

Copyright © 2017 Google, Inc. Published by O'Reilly Media, Inc. Licensed under CC BY-NC-ND 4.0
2026-01-13T09:29:20
https://goo.gle/sre-classroom-distributed-pubsub
Google SRE classroom - Distributed Publish-subscribe workshop

SRE Classroom: Distributed PubSub

Introduction

SRE Classroom: Distributed PubSub is a workshop developed by Google's Site Reliability Engineering group. The goals of this workshop are to (1) introduce participants to the principles of non-abstract large systems design (NALSD), and (2) provide hands-on experience with applying these principles to the design and evaluation of such systems. We consider NALSD a concept fundamental to SRE, and understanding its principles provides a basis for having meaningful conversations about the design and operation of large software systems. In the first, theoretical part of the workshop, participants learn about some foundational large system design principles and concepts.
Topics include correctness, reliability, performance, different inter-system communication styles, and more. We introduce the problem requirements in detail and walk through the first parts of an example solution. The practical part of this workshop asks participants to apply the principles they have learned to develop a publish-subscribe system that meets certain performance and correctness requirements and Service Level Objectives (SLOs). The workshop concludes with a detailed example solution, as well as a discussion of the system's inputs and SLOs.

Target Audience

This workshop includes technical content, and its primary audience is software developers and site reliability engineers. We have also welcomed folks in various other roles, including product management and senior engineering management, to this workshop. The workshop includes hands-on work well suited for groups of five, and scales well from 1 to 20 groups—as many as a hundred participants!

Workshop Materials

Distributed PubSub – Slides

This presentation is the backbone of the workshop. It contains the training content that prepares participants for the practical exercises. There are detailed speaker notes for presenters that make it possible to deliver the workshop with minimal preparation. We also provide a Presenter Guide with additional tips and guidance for leading the workshop.

- Presentation Slides
- PDF version of these slides, without speaker notes
- Presenter Guide - A4
- Presenter Guide - Letter

Distributed PubSub – Participant Resources

The Participant Handout contains additional details about the exercise. The Latency Numbers Everyone Should Know handout contains reference numbers that are useful for back-of-the-envelope calculations. The NALSD Workbook contains reference material that is useful both during the workshop and more generally when applying the NALSD approach to solving system design problems.
- Participant Handout - A4
- Participant Handout - Letter
- Latency Numbers Everyone Should Know - A4
- Latency Numbers Everyone Should Know - Letter
- NALSD Workbook - A4
- NALSD Workbook - Letter

Distributed PubSub – Facilitator Resources

The Facilitator Guide contains tips and guidance for facilitators of the workshop. Facilitators should read this ahead of time to prepare for making the workshop an awesome experience for everyone involved. The breakout template can be used to set up breakout groups during the hands-on portion of the workshop. This preparation step can be done by either the facilitators or the presenter – be sure to coordinate and make a game plan ahead of time!

- Facilitator Guide - A4
- Facilitator Guide - Letter
- Breakout Template

Additional Resources

We aim to develop durable SRE Classroom materials for folks learning about NALSD. If you find this useful, tell us what you want to see in future exercises. Please use the issue tracker to send us your thoughts and suggestions. Alternatively, send us a tweet at @googlesre. Visit the SRE Classroom page to learn more about NALSD and SRE.

Licensing

The workshop documents above are released under the Creative Commons CC-BY-4.0 license for anyone to use and reuse, as long as Google is credited as the original author. If you want to suggest improvements, have any problems with the content, or just want to ask a question, please create a bug in our issue tracker component.
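As a taste of the back-of-the-envelope style these handouts support, here is a tiny sizing calculation for a hypothetical pub-sub node. All figures are rough, commonly quoted planning numbers, not measurements of any real system or workshop answer key:

```python
# Rough NALSD-style sizing for a hypothetical pub-sub fleet.
NIC_BYTES_PER_SEC = 10e9 / 8      # assume a 10 Gbit/s NIC per node
MSG_BYTES = 1_000                 # assume 1 KB messages

# One node can move at most this many messages per second:
msgs_per_sec_per_node = NIC_BYTES_PER_SEC / MSG_BYTES      # 1.25 million

# Nodes needed to absorb 10M msgs/s at 50% utilization
# (headroom for spikes and node failures):
target_msgs_per_sec = 10_000_000
nodes = target_msgs_per_sec / (msgs_per_sec_per_node * 0.5)
print(int(msgs_per_sec_per_node), int(nodes))   # 1250000 16
```

The point of the exercise is not the exact answer but making every assumption (NIC speed, message size, headroom) explicit and checkable.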
https://w3techs.com/technologies/details/dn-amazon
Usage Statistics and Market Share of Amazon as DNS Server Provider, January 2026 (provided by Q-Success)

These diagrams show the usage statistics of Amazon as DNS server provider.
See technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily.

Amazon is used as DNS server provider by 3.7% of all the websites.

The historical-trend diagram shows the percentage of websites using Amazon over time; the market-position diagram compares Amazon's popularity and traffic with the most popular DNS server providers.

Popular sites using Amazon: Amazon.com, Github.com, Netflix.com, X.com, Spotify.com (on subdomain), Samsung.com (on subdomain), Vimeo.com, Reddit.com, Zoom.us (on subdomain), Dropbox.com.

Random selection of sites using Amazon: Omroepwest.nl, Otonasalone.jp, 7ypro.vip, Ddasports.com, Forkast.news.

Sites using Amazon only recently: Canva.com, Lavanguardia.com, Sentry.io (on subdomain), Medium.com (on subdomain), Nature.com (on subdomain).

Technology Brief: Amazon (category: DNS Server Providers) is a US-based e-commerce and cloud computing provider. Website: aws.amazon.com

Copyright © 2009-2026 Q-Success
https://w3techs.com/technologies/details/dn-grouparuba
Usage Statistics and Market Share of Aruba Group as DNS Server Provider, January 2026 (provided by Q-Success)
These diagrams show the usage statistics of Aruba Group as DNS server provider. See technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily.

Aruba Group is used as DNS server provider by 1.3% of all the websites.

Subcategories of Aruba Group, as a percentage of all websites using Aruba Group (a website may use more than one subcategory; W3Techs.com, 13 January 2026):

- Aruba: 84.9%
- Forpsi: 14.1%
- General Registry: 1.0%

The historical-trend diagram shows the percentage of websites using Aruba Group over time; the market-position diagram compares Aruba Group's popularity and traffic with the most popular DNS server providers.

Popular sites using Aruba Group: Staseraintv.com, Aruba.it, Ildolomiti.it, Tuttonapoli.net, Preghiereperlafamiglia.it, Tuttojuve.com, Fcinter1908.it, Mininterno.net, Megaknihy.cz, Tuttomercatoweb.com.

Random selection of sites using Aruba Group: Zskladno-amalska.cz, Fitp.it, Pizzone.com, Accublind.cz, Appress.it.

Sites using Aruba Group only recently: Acquadanuotare.it, Caniledichieri.org, Comunetrabia.it, Liberacr.it, Jasalegalisir.com.
Technology Brief: Aruba Group (category: DNS Server Providers). Aruba is an Italian internet service company. Website: aruba.it
https://w3techs.com/privacy_policy
W3Techs - Privacy Policy

Bottom Line

We fully respect your privacy. We do not collect any personal information. If you sign up for a user account, you may cancel your account at any time, in which case all data directly related to that account is removed. We use a few very common practices with data obtained during your browsing session in order to make this service possible, to monitor its performance, and to find ways to improve it.

Introduction

Q-Success, the company that runs this site, respects each individual's right to personal privacy. We will collect and use information through our web site only in the ways disclosed in this statement. This statement applies to information collected at W3Techs.com.

Information Collection

Q-Success only collects data that is not personally identifiable by us. We use web server logfiles and logging tools to improve the performance of the site and to monitor its use. We offer the possibility to provide personal data, including an email address, during sign-up. We do employ cookies to identify users after sign-up and login. A cookie is a small text file that our web server places on a user's computer hard drive to store some information. The site can be used without accepting cookies, but some functionalities cannot be accessed in that case.

If you sign up for a user account, you may cancel your account at any time. In that case all data directly related to that account is removed; however, community contributions such as blog entries, blog comments, and forum entries made using that account will remain on the site.

We use advertising partners. These partners use cookies and similar techniques to collect data in the ad serving process. However, advertising partners have no access to private data you provide on this site.
Advertising partners may use third-party advertising companies to serve ads when you visit our website. These companies may use information (not including your name, address, email address, or telephone number) about your visits to this and other websites in order to provide advertisements about goods and services of interest to you.

Information Usage

The information collected by Q-Success is used to monitor this site's performance and to find ways to improve it. The information we collect will not be used to create customer profiles based on browsing history. We will not supplement information collected at our web site with data from other sources. Email addresses disclosed by users for sign-up are not used by Q-Success for other purposes and are not divulged to any third party. We do not share data with third parties.

The user name you provide during sign-up, which may or may not be your real name, may be visible to others. Other data you provide during sign-up, particularly the email address, will be visible only to site administrators. All information in the user profile other than email address and password is publicly visible unless otherwise noted. If you do not want such information publicly visible, then do not post such information to your user profile.

We offer links to other web sites. Please note: when you click on links to other web sites, we encourage you to read their privacy policies. Their standards may differ from ours.

If you have any further questions about privacy, please contact us by sending an email to Office@W3Techs.com.

Last Modification of this Page: 17 September 2023
https://w3techs.com/technologies/cross/dns_server/client_side_language
Usage Survey of DNS Server Providers broken down by Client-side Programming Languages (provided by Q-Success)

This table shows the percentages of websites using various DNS server providers broken down by client-side programming languages. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See technologies overview for explanations on the methodologies used in the surveys.

How to read the table: Cloudflare is used by 15.4% of all the websites, and by 15.4% of all the websites that use JavaScript as client-side programming language.

Provider                  Overall   JavaScript   Flash
Cloudflare                15.4%     15.4%        11.7%
GoDaddy Group             10.1%     10.2%        7.1%
Newfold Digital Group     4.0%      4.0%         7.2%

(W3Techs.com, 13 January 2026)

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
Technology Brief: DNS Server Providers. A DNS (domain name system) server manages internet domain names and their associated records such as IP addresses. We group brands that are owned by the same entity to show the real relevant market shares.

Latest related posting: "Web Technologies of the Year 2025" (5 January 2026): we compiled the list of web technologies that saw the largest increase in usage in 2025.
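The breakdown figures above are conditional shares: within the set of sites using a given client-side language, the fraction using each DNS provider. A toy computation over invented sites shows the arithmetic (the site names and assignments are made up, not W3Techs data):

```python
# Cross-technology breakdown: share of a DNS provider among the sites
# using a given client-side language. Sites and assignments are invented.
sites = {
    "a.example": {"lang": "JavaScript", "dns": "Cloudflare"},
    "b.example": {"lang": "JavaScript", "dns": "Cloudflare"},
    "c.example": {"lang": "JavaScript", "dns": "GoDaddy Group"},
    "d.example": {"lang": "JavaScript", "dns": "Newfold Digital Group"},
    "e.example": {"lang": "Flash",      "dns": "Cloudflare"},
}

def breakdown(lang, dns):
    """Percentage of sites with client-side language `lang` using `dns`."""
    subset = [s for s in sites.values() if s["lang"] == lang]
    return 100 * sum(s["dns"] == dns for s in subset) / len(subset)
```

Note that the denominator changes per column: each language column is normalized over only the sites using that language, which is why a provider's share can differ between the Overall, JavaScript, and Flash columns.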
https://w3techs.com/technologies/details/dn-hostinger
Usage Statistics and Market Share of Hostinger as DNS Server Provider, January 2026 (provided by Q-Success)

These diagrams show the usage statistics of Hostinger as DNS server provider.
See technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily.

Hostinger is used as DNS server provider by 4.0% of all the websites.

The historical-trend diagram shows the percentage of websites using Hostinger over time; the market-position diagram compares Hostinger's popularity and traffic with the most popular DNS server providers.

Popular sites using Hostinger: Hostinger.com, 3lawey.com, Kingsofsatta.com, Quuaas.xyz, Elvebredd.com, Ssoid.net.in, Modderguy.in, Sympla.com.br (on subdomain), Editprotips.in, Razzsumanphotography.com.

Random selection of sites using Hostinger: Josephgomesforuscongress.com, Vedicvichar.com, Oranutrition.co.nz, Buenapizzaria.com.br, Yaathi.in.

Sites using Hostinger only recently: Bisadanedu.com, Lorangeteam.com, Nashipisni.com, Pharmabharat.com, Homoeopathycollegegoa.org.

Technology Brief: Hostinger (category: DNS Server Providers) is an internet services provider. This includes its brands 000webhost, Hosting24, Niagahoster, Weblink and Zyro. Website: hostinger.com

Latest related posting: "Web Technologies of the Year 2025" (5 January 2026): we compiled the list of web technologies that saw the largest increase in usage in 2025.
https://w3techs.com/faq#adv
Frequently Asked Questions

If you have any questions about our service, this is a good place to look for answers.

Data collection

How do you know which technologies are used by a site?

Primarily, we use information provided by the site itself when downloading web pages. In other words, we fetch web pages very much like a search engine, and analyze the results. Additionally, we use publicly available information from sources such as Tranco, Google, Microsoft and ipinfo.io.

How exactly does your website analyzer work?

We search for specific patterns in the web pages that identify the usage of technologies, similarly to the way a virus scanner searches for patterns in a file to identify viruses. We use a combination of regular expressions and DOM traversal for this search. We have identified several thousand indicators for technology usage. These indicators have different priorities, and based on the presence or absence of specific combinations of indicators in a specific context, we come to our conclusions. These are examples of the information used by the indicators:

- HTML elements of web pages
- Specific HTML tags, for example the generator meta tag
- JavaScript code
- CSS code
- The URL structure of a site
- HTTP headers, for example cookies
- HTTP responses to specific requests, for example compression
- DNS records
- Whois information

Additionally, we exploit dependencies between technologies. For example, if we find a WordPress site, we know that it is using PHP. A fair share of our data is based on such dependencies. A lot of research was necessary to build the analyzer, and we keep improving it all the time. We want it to be the best possible website analyzer.

How accurate is your information?
It is impossible for surveys of this type to be 100% accurate, since websites can choose to hide most of their technologies if they want to. See also our disclaimer for some more information. There is no way to be absolutely sure of avoiding some errors in the technology identification. We try to find ways to balance the false positives and the false negatives (after eliminating as many as possible), and we try to make sure that none of the remaining errors cluster on one technology rather than another. Our goal is to provide the most accurate and reliable web technology surveys, so that the answer to this question would be: it is as accurate as one can possibly get. We believe that we are not too far away from that goal.

How often do you visit a site?
That depends on a number of factors, but approximately once a month, some sites less often.

Do you analyze only the home page or also inner pages and subdomains?
In most cases we crawl deeper, visiting a few sample pages.

Reports

How often do you update the reports?
All reports on our website are updated daily. Although we don't analyze every site every day (see above), we permanently add new information into our database, and we want new trends to be visible as quickly as possible. The much more extensive technology market reports are generated monthly.

Which websites do you count? Do you crawl all the web?
For the surveys, we count what we call the relevant web; see our technology overview for more explanations. We do crawl more sites, but we are convinced that our statistics would become less useful and less relevant by including all the typical "throw-away" sites, parked domains, and other types of spam sites.

In some of the market share reports, the figures don't add up to 100%. How come?
That is the case when websites use more than one of the technologies; for example, websites may use more than one server-side programming language.
We could do the calculations differently, but then a usage of 50% would not necessarily mean that the technology is used by every second site, which we would find quite confusing.

Why are your figures sometimes very different from figures published elsewhere?
The biggest source of confusion comes from the fact that we measure technologies used for websites, whereas other surveys measure something else. For example, the well-known Tiobe Index measures the overall popularity of programming languages. C is more popular than PHP in that report, but C is very rarely used to build websites. Another example is Distrowatch, which measures the popularity of Linux distributions, but that includes the popularity of desktop installations. Therefore their ranking is different from ours. Other published figures on the usage of web technologies are often based on different samples. For example, they may use very small random samples, or samples favoring specific geographical regions; they may use only a small fraction of the web, say the top 10,000 sites; they may include subdomains or even individual web pages in their counts; or they may even be based on polls of their website visitors. If there are no such differences in the measurement techniques, then there are certainly still differences in the website analyzing methods. We know for sure that a lot of research has been done to develop our analyzing methods; we are not so sure about others.

Advanced Reports

What are these breakdown and segmentation reports in the navigation bar?
In the breakdown reports, you can see the usage of combinations of technologies, e.g. which JavaScript libraries are used together with which content management systems. This is an example of an overview breakdown report, showing the most popular technologies of two categories. If you want more details, you have to navigate to a specific technology, e.g. WordPress, and then click on JavaScript Libraries under the Breakdown menu.
Within the WordPress report, if you click on JavaScript Libraries under the Segmentation menu, you get a similar report, showing the distribution of JavaScript libraries among all the websites that use WordPress as their content management system. You can switch between the breakdown report and the segmentation report by clicking on the Related Reports menu entry. Breakdown and segmentation reports are very powerful analysis tools. You probably have to play around a bit to explore all the possibilities and to find your way through the navigation to the reports you want. Use this as an example: if you want to know which web server technologies are used in Kyrgyzstan, navigate from the Technologies overview to the Top Level Domain report. Then scroll all the way down to .kg for Kyrgyzstan (or use Ctrl-F in your browser to find it quickly) and click on it. Next, click on Web Servers under the Segmentation menu, and you see the report you wanted. Please be aware that some technologies have a very low representation in our sample. Breakdown and segmentation reports may have a high statistical variance in these cases; in other words, the figures may be unreliable. For instance, we know of only one site that uses Neapolitan (Wikipedia). Don't expect any useful statistics from such a data set.

Any other questions

If you have more questions, please feel free to post them in the forums or send them to us directly, if you prefer.
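The pattern-matching approach in the answer to "How exactly does your website analyzer work?" can be sketched in a few lines of Python. This is a toy illustration: the indicator table, priorities, and dependency map below are invented for the sketch and are not W3Techs' actual rules.

```python
import re

# Invented indicator table: each entry maps a regex found in a fetched page
# to a technology name and a priority (higher wins on conflicting matches).
INDICATORS = [
    (re.compile(r'<meta name="generator" content="WordPress', re.I), "WordPress", 10),
    (re.compile(r'wp-content/', re.I), "WordPress", 5),
    (re.compile(r'<script[^>]+jquery', re.I), "jQuery", 5),
]

IMPLIES = {"WordPress": {"PHP"}}  # a WordPress site is known to run PHP

def detect(html: str) -> set:
    """Return the set of technologies whose indicators match the page."""
    found = {name for pattern, name, _prio in INDICATORS if pattern.search(html)}
    for tech in list(found):          # exploit dependencies between technologies
        found |= IMPLIES.get(tech, set())
    return found

page = '<meta name="generator" content="WordPress 6.4"><script src="/js/jquery.min.js"></script>'
print(sorted(detect(page)))  # ['PHP', 'WordPress', 'jQuery']
```

The dependency step mirrors the FAQ's point that "a fair share of our data is based on such dependencies": one confident match can imply further technologies without any direct indicator.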
https://w3techs.com/technologies/overview/ssl_certificate
Usage Statistics and Market Share of SSL Certificate Authorities for Websites, January 2026

Technologies > SSL Certificate Authorities

Usage statistics and market shares of SSL certificate authorities for websites. This diagram shows the percentages of websites using various SSL certificate authorities. See the technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily. How to read the diagram: 2.7% of the websites use none of the SSL certificate authorities that we monitor. Let’s Encrypt is used by 60.7% of all the websites; that is an SSL certificate authority market share of 64.1%.
None                    2.7%
Invalid Domain          1.9%
Certificate Expired     0.7%
Unrecognized Authority  0.1%

SSL certificate authority   usage   market share
Let’s Encrypt               60.7%   64.1%
GlobalSign                  22.5%   23.7%
Sectigo                      5.5%    5.8%
GoDaddy Group                3.7%    3.9%
DigiCert Group               1.8%    1.9%
Actalis                      0.6%    0.7%
Certum                       0.5%    0.6%
Secom Trust                  0.3%    0.3%

W3Techs.com, 13 January 2026. Percentages of websites using various SSL certificate authorities. Note: a website may use more than one SSL certificate authority.

The following SSL certificate authorities have a market share of less than 0.1%: SSL.com, Harica, IdenTrust, TWCA, WISeKey Group, SwissSign, ZeroSSL, D-Trust, Chunghwa Telecom, GoGetSSL, Buypass, Deutsche Telekom, LevelBlue, Gandi, Microsec, Certigna, Entrust, Izenpe, Network Solutions, CertSIGN, Hongkong Post, Amazon, TÜBİTAK, Disig, NetLock, E-Tugra, Camerfirma, Firmaprofesional, StartCom, Logius.

Technology Brief: SSL Certificate Authorities. SSL certificate authorities are institutions that issue SSL certificates. We monitor authorities that are trusted by major browsers.
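The two columns relate as the "How to read the diagram" note explains: market share divides a certificate authority's usage by the share of sites that use any monitored authority at all. A quick check in plain Python (an illustration, not W3Techs' code) confirms the arithmetic:

```python
# Figures from the diagram above, in percent of all websites.
unmonitored = 2.7 + 1.9 + 0.7 + 0.1      # none, invalid, expired, unrecognized
monitored_total = 100.0 - unmonitored    # 94.6% of sites use a monitored CA

for ca, usage in [("Let's Encrypt", 60.7), ("GlobalSign", 22.5), ("Sectigo", 5.5)]:
    share = 100.0 * usage / monitored_total
    # e.g. Let's Encrypt: 60.7 / 94.6 -> about 64.2%, matching the published
    # 64.1% up to rounding of the published input figures.
    print(f"{ca}: usage {usage}% -> market share {share:.1f}%")
```

Small discrepancies in the last digit are expected, since the published inputs are themselves rounded to one decimal place.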
https://w3techs.com/technologies/overview/markup_language
Usage Statistics and Market Share of Markup Languages for Websites, January 2026

Technologies > Markup Languages

Usage statistics of markup languages for websites. This diagram shows the percentages of websites using various markup languages. See the technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily. How to read the diagram: HTML is used by 97.3% of all the websites whose markup language we know.

HTML    97.3%
XHTML    3.6%

W3Techs.com, 13 January 2026. Percentages of websites using various markup languages. Note: a website may use more than one markup language.
Technology Brief: Markup Languages. A markup language is a computer language used to describe web pages.
https://sre.google/sre-book/automation-at-google#id-7pEuOFgFdt4Tz
Google SRE - Google Automation For Reliability

Chapter 7 - The Evolution of Automation at Google

Written by Niall Murphy with John Looney and Michael Kacirek
Edited by Betsy Beyer

Besides black art, there is only automation and mechanization.
Federico García Lorca (1898–1936), Spanish poet and playwright

For SRE, automation is a force multiplier, not a panacea.
Of course, just multiplying force does not naturally change the accuracy of where that force is applied: doing automation thoughtlessly can create as many problems as it solves. Therefore, while we believe that software-based automation is superior to manual operation in most circumstances, better than either option is a higher-level system design requiring neither of them—an autonomous system. Or to put it another way, the value of automation comes from both what it does and its judicious application. We’ll discuss both the value of automation and how our attitude has evolved over time.

The Value of Automation

What exactly is the value of automation? 26

Consistency

Although scale is an obvious motivation for automation, there are many other reasons to use it. Take the example of university computing systems, where many systems engineering folks started their careers. Systems administrators of that background were generally charged with running a collection of machines or some software, and were accustomed to manually performing various actions in the discharge of that duty. One common example is creating user accounts; others include purely operational duties like making sure backups happen, managing server failover, and small data manipulations like changing the upstream DNS servers’ resolv.conf, DNS server zone data, and similar activities. Ultimately, however, this prevalence of manual tasks is unsatisfactory for both the organizations and indeed the people maintaining systems in this way. For a start, any action performed by a human or humans hundreds of times won’t be performed the same way each time: even with the best will in the world, very few of us will ever be as consistent as a machine. This inevitable lack of consistency leads to mistakes, oversights, issues with data quality, and, yes, reliability problems. In this domain—the execution of well-scoped, known procedures—the value of consistency is in many ways the primary value of automation.
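The consistency argument can be made concrete with the account-creation example above: an idempotent automated routine yields the same result on every run, which a human repeating the procedure hundreds of times cannot guarantee. A minimal sketch, in which the user table, field names, and UID scheme are all invented:

```python
# Idempotent account creation: running it once or a hundred times leaves
# the system in exactly the same state. All names here are illustrative.
def create_account(users: dict, name: str, shell: str = "/bin/bash") -> dict:
    if name in users:                 # already provisioned: change nothing
        return users[name]
    uid = 1000 + len(users)           # deterministic allocation for the sketch
    users[name] = {"uid": uid, "home": f"/home/{name}", "shell": shell}
    return users[name]

users = {}
first = create_account(users, "alice")
again = create_account(users, "alice")  # a repeat run is a no-op
assert first == again and len(users) == 1
```

The point is not the code itself but the property it encodes: the machine executes the same well-scoped procedure identically every time.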
A Platform

Automation doesn’t just provide consistency. Designed and done properly, automatic systems also provide a platform that can be extended, applied to more systems, or perhaps even spun out for profit. 27 (The alternative, no automation, is neither cost effective nor extensible: it is instead a tax levied on the operation of a system.) A platform also centralizes mistakes. In other words, a bug fixed in the code will be fixed there once and forever, unlike a sufficiently large set of humans performing the same procedure, as discussed previously. A platform can be extended to perform additional tasks more easily than humans can be instructed to perform them (or sometimes even realize that they have to be done). Depending on the nature of the task, it can run either continuously or much more frequently than humans could appropriately accomplish the task, or at times that are inconvenient for humans. Furthermore, a platform can export metrics about its performance, or otherwise allow you to discover details about your process you didn’t know previously, because these details are more easily measurable within the context of a platform.

Faster Repairs

There’s an additional benefit for systems where automation is used to resolve common faults in a system (a frequent situation for SRE-created automation). If automation runs regularly and successfully enough, the result is a reduced mean time to repair (MTTR) for those common faults. You can then spend your time on other tasks instead, thereby achieving increased developer velocity because you don’t have to spend time either preventing a problem or (more commonly) cleaning up after it. As is well understood in the industry, the later in the product lifecycle a problem is discovered, the more expensive it is to fix; see Testing for Reliability.
Generally, problems that occur in actual production are most expensive to fix, both in terms of time and money, which means that an automated system looking for problems as soon as they arise has a good chance of lowering the total cost of the system, given that the system is sufficiently large.

Faster Action

In the infrastructural situations where SRE automation tends to be deployed, humans don’t usually react as fast as machines. In most common cases, where, for example, failover or traffic switching can be well defined for a particular application, it makes no sense to effectively require a human to intermittently press a button called “Allow system to continue to run.” (Yes, it is true that sometimes automatic procedures can end up making a bad situation worse, but that is why such procedures should be scoped over well-defined domains.) Google has a large amount of automation; in many cases, the services we support could not long survive without this automation because they crossed the threshold of manageable manual operation long ago.

Time Saving

Finally, time saving is an oft-quoted rationale for automation. Although people cite this rationale for automation more than the others, in many ways the benefit is often less immediately calculable. Engineers often waver over whether a particular piece of automation or code is worth writing, in terms of effort saved in not requiring a task to be performed manually versus the effort required to write it. 28 It’s easy to overlook the fact that once you have encapsulated some task in automation, anyone can execute the task. Therefore, the time savings apply across anyone who would plausibly use the automation. Decoupling operator from operation is very powerful. Joseph Bironas, an SRE who led Google’s datacenter turnup efforts for a time, forcefully argued: "If we are engineering processes and solutions that are not automatable, we continue having to staff humans to maintain the system.
If we have to staff humans to do the work, we are feeding the machines with the blood, sweat, and tears of human beings. Think The Matrix with less special effects and more pissed off System Administrators."

The Value for Google SRE

All of these benefits and trade-offs apply to us just as much as anyone else, and Google does have a strong bias toward automation. Part of our preference for automation springs from our particular business challenges: the products and services we look after are planet-spanning in scale, and we don’t typically have time to engage in the same kind of machine or service hand-holding common in other organizations. 29 For truly large services, the factors of consistency, quickness, and reliability dominate most conversations about the trade-offs of performing automation. Another argument in favor of automation, particularly in the case of Google, is our complicated yet surprisingly uniform production environment, described in The Production Environment at Google, from the Viewpoint of an SRE. While other organizations might have an important piece of equipment without a readily accessible API, software for which no source code is available, or another impediment to complete control over production operations, Google generally avoids such scenarios. We have built APIs for systems when no API was available from the vendor. Even though purchasing software for a particular task would have been much cheaper in the short term, we chose to write our own solutions, because doing so produced APIs with the potential for much greater long-term benefits. We spent a lot of time overcoming obstacles to automatic system management, and then resolutely developed that automatic system management itself. Given how Google manages its source code [Pot16], the availability of that code for more or less any system that SRE touches also means that our mission to “own the product in production” is much easier because we control the entirety of the stack.
Of course, although Google is ideologically bent upon using machines to manage machines where possible, reality requires some modification of our approach. It isn’t appropriate to automate every component of every system, and not everyone has the ability or inclination to develop automation at a particular time. Some essential systems started out as quick prototypes, not designed to last or to interface with automation. The previous paragraphs state a maximalist view of our position, but one that we have been broadly successful at putting into action within the Google context. In general, we have chosen to create platforms where we could, or to position ourselves so that we could create platforms over time. We view this platform-based approach as necessary for manageability and scalability.

The Use Cases for Automation

In the industry, automation is the term generally used for writing code to solve a wide variety of problems, although the motivations for writing this code, and the solutions themselves, are often quite different. More broadly, in this view, automation is “meta-software”—software to act on software. As we implied earlier, there are a number of use cases for automation. Here is a non-exhaustive list of examples:

- User account creation
- Cluster turnup and turndown for services
- Software or hardware installation preparation and decommissioning
- Rollouts of new software versions
- Runtime configuration changes
- A special case of runtime config changes: changes to your dependencies

This list could continue essentially ad infinitum.

Google SRE’s Use Cases for Automation

In Google, we have all of the use cases just listed, and more. However, within Google SRE, our primary affinity has typically been for running infrastructure, as opposed to managing the quality of the data that passes over that infrastructure.
This line isn’t totally clear—for example, we care deeply if half of a dataset vanishes after a push, and therefore we alert on coarse-grain differences like this, but it’s rare for us to write the equivalent of changing the properties of some arbitrary subset of accounts on a system. Therefore, the context for our automation is often automation to manage the lifecycle of systems, not their data: for example, deployments of a service in a new cluster. To this extent, SRE’s automation efforts are not far off what many other people and organizations do, except that we use different tools to manage it and have a different focus (as we’ll discuss). Widely available tools like Puppet, Chef, cfengine, and even Perl, which all provide ways to automate particular tasks, differ mostly in terms of the level of abstraction of the components provided to help the act of automating. A full language like Perl provides POSIX-level affordances, which in theory provide an essentially unlimited scope of automation across the APIs accessible to the system, 30 whereas Chef and Puppet provide out-of-the-box abstractions with which services or other higher-level entities can be manipulated. The trade-off here is classic: higher-level abstractions are easier to manage and reason about, but when you encounter a “leaky abstraction,” you fail systemically, repeatedly, and potentially inconsistently. For example, we often assume that pushing a new binary to a cluster is atomic; the cluster will either end up with the old version, or the new version. However, real-world behavior is more complicated: that cluster’s network can fail halfway through; machines can fail; communication to the cluster management layer can fail, leaving the system in an inconsistent state; depending on the situation, new binaries could be staged but not pushed, or pushed but not restarted, or restarted but not verifiable. 
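The leaky "atomic push" abstraction just described can be modeled as a per-machine state check. The state names below are illustrative, not the vocabulary of any real rollout tool:

```python
# The states a machine can be stranded in when a "push the new binary"
# operation fails partway; the push only looks atomic if every machine
# ends up in the same state. State names are invented for the sketch.
STATES = ("old", "staged", "pushed", "restarted", "verified")

def cluster_consistent(machine_states: list) -> bool:
    """True only if every machine reached the same rollout state."""
    return len(set(machine_states)) == 1

# A network failure halfway through leaves a mixed cluster:
cluster = ["verified", "verified", "pushed", "staged"]
print(cluster_consistent(cluster))  # False: halt and call for intervention
```

The check is trivial; what is hard, as the text notes, is an abstraction that models and repairs each of these intermediate states instead of merely detecting them.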
Very few abstractions model these kinds of outcomes successfully, and most generally end up halting themselves and calling for intervention. Truly bad automation systems don’t even do that. SRE has a number of philosophies and products in the domain of automation, some of which look more like generic rollout tools without particularly detailed modeling of higher-level entities, and some of which look more like languages for describing service deployment (and so on) at a very abstract level. Work done in the latter tends to be more reusable and be more of a common platform than the former, but the complexity of our production environment sometimes means that the former approach is the most immediately tractable option.

A Hierarchy of Automation Classes

Although all of these automation steps are valuable, and indeed an automation platform is valuable in and of itself, in an ideal world, we wouldn’t need externalized automation. In fact, instead of having a system that has to have external glue logic, it would be even better to have a system that needs no glue logic at all, not just because internalization is more efficient (although such efficiency is useful), but because it has been designed to not need glue logic in the first place. Accomplishing that involves taking the use cases for glue logic—generally “first order” manipulations of a system, such as adding accounts or performing system turnup—and finding a way to handle those use cases directly within the application. As a more detailed example, most turnup automation at Google is problematic because it ends up being maintained separately from the core system and therefore suffers from “bit rot,” i.e., not changing when the underlying systems change. Despite the best of intentions, attempting to more tightly couple the two (turnup automation and the core system) often fails due to unaligned priorities, as product developers will, not unreasonably, resist a test deployment requirement for every change.
Secondly, automation that is crucial but only executed at infrequent intervals and therefore difficult to test is often particularly fragile because of the extended feedback cycle. Cluster failover is one classic example of infrequently executed automation: failovers might only occur every few months, or infrequently enough that inconsistencies between instances are introduced. The evolution of automation follows a path:

1) No automation: a database master is failed over manually between locations.
2) Externally maintained system-specific automation: an SRE has a failover script in his or her home directory.
3) Externally maintained generic automation: the SRE adds database support to a "generic failover" script that everyone uses.
4) Internally maintained system-specific automation: the database ships with its own failover script.
5) Systems that don’t need any automation: the database notices problems, and automatically fails over without human intervention.

SRE hates manual operations, so we obviously try to create systems that don’t require them. However, sometimes manual operations are unavoidable. There is additionally a subvariety of automation that applies changes not across the domain of specific system-related configuration, but across the domain of production as a whole. In a highly centralized proprietary production environment like Google’s, there are a large number of changes that have a non–service-specific scope—e.g., changing upstream Chubby servers, a flag change to the Bigtable client library to make access more reliable, and so on—which nonetheless need to be safely managed and rolled back if necessary. Beyond a certain volume of changes, it is infeasible for production-wide changes to be accomplished manually, and at some time before that point, it’s a waste to have manual oversight for a process where a large proportion of the changes are either trivial or accomplished successfully by basic relaunch-and-check strategies.
Let’s use internal case studies to illustrate some of the preceding points in detail. The first case study is about how, due to some diligent, far-sighted work, we managed to achieve the self-professed nirvana of SRE: to automate ourselves out of a job.

Automate Yourself Out of a Job: Automate ALL the Things!

For a long while, the Ads products at Google stored their data in a MySQL database. Because Ads data obviously has high reliability requirements, an SRE team was charged with looking after that infrastructure. From 2005 to 2008, the Ads Database mostly ran in what we considered to be a mature and managed state. For example, we had automated away the worst, but not all, of the routine work for standard replica replacements. We believed the Ads Database was well managed and that we had harvested most of the low-hanging fruit in terms of optimization and scale. However, as daily operations became comfortable, team members began to look at the next level of system development: migrating MySQL onto Google’s cluster scheduling system, Borg. We hoped this migration would provide two main benefits:

- Completely eliminate machine/replica maintenance: Borg would automatically handle the setup/restart of new and broken tasks.
- Enable bin-packing of multiple MySQL instances on the same physical machine: Borg would enable more efficient use of machine resources via containers.

In late 2008, we successfully deployed a proof-of-concept MySQL instance on Borg. Unfortunately, this was accompanied by a significant new difficulty. A core operating characteristic of Borg is that its tasks move around automatically. Tasks commonly move within Borg as frequently as once or twice per week. This frequency was tolerable for our database replicas, but unacceptable for our masters. At that time, the process for master failover took 30–90 minutes per instance.
Simply because we ran on shared machines and were subject to reboots for kernel upgrades, in addition to the normal rate of machine failure, we had to expect a number of otherwise unrelated failovers every week. This factor, in combination with the number of shards on which our system was hosted, meant that:

- Manual failovers would consume a substantial amount of human hours and would give us best-case availability of 99% uptime, which fell short of the actual business requirements of the product.
- In order to meet our error budgets, each failover would have to take less than 30 seconds of downtime.
- There was no way to optimize a human-dependent procedure to make downtime shorter than 30 seconds.

Therefore, our only choice was to automate failover. Actually, we needed to automate more than just failover. In 2009 Ads SRE completed our automated failover daemon, which we dubbed “Decider.” Decider could complete MySQL failovers for both planned and unplanned failovers in less than 30 seconds 95% of the time. With the creation of Decider, MySQL on Borg (MoB) finally became a reality. We graduated from optimizing our infrastructure for a lack of failover to embracing the idea that failure is inevitable, and therefore optimizing to recover quickly through automation. While automation let us achieve highly available MySQL in a world that forced up to two restarts per week, it did come with its own set of costs. All of our applications had to be changed to include significantly more failure-handling logic than before. Given that the norm in the MySQL development world is to assume that the MySQL instance will be the most stable component in the stack, this switch meant customizing software like JDBC to be more tolerant of our failure-prone environment. However, the benefits of migrating to MoB with Decider were well worth these costs. Once on MoB, the time our team spent on mundane operational tasks dropped by 95%.
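Decider itself is not public, but the shape of such an automated failover daemon can be sketched. Everything below (function names, host names, the health-check interface) is invented for illustration:

```python
# Deliberately simplified failover logic in the spirit of an automated
# failover daemon such as "Decider". All names here are illustrative.
def failover(master: str, replicas: list, is_healthy, promote) -> str:
    """Return the serving master, promoting a replica if the master died."""
    if is_healthy(master):
        return master
    for candidate in replicas:
        if is_healthy(candidate):
            promote(candidate)   # must complete well inside the 30 s budget
            return candidate
    raise RuntimeError("no healthy replica; page a human")

health = {"db-master": False, "db-replica-1": True}
new_master = failover("db-master", ["db-replica-1"], health.get, lambda c: None)
print(new_master)  # db-replica-1
```

Note the final branch: as the text says of good automation, when no safe action remains the system should stop and call for intervention rather than guess.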
Our failovers were automated, so an outage of a single database task no longer paged a human. The main upshot of this new automation was that we had a lot more free time to spend on improving other parts of the infrastructure. Such improvements had a cascading effect: the more time we saved, the more time we were able to spend on optimizing and automating other tedious work. Eventually, we were able to automate schema changes, causing the cost of total operational maintenance of the Ads Database to drop by nearly 95%. Some might say that we had successfully automated ourselves out of this job.

The hardware side of our domain also saw improvement. Migrating to MoB freed up considerable resources because we could schedule multiple MySQL instances on the same machines, which improved utilization of our hardware. In total, we were able to free up about 60% of our hardware. Our team was now flush with hardware and engineering resources.

This example demonstrates the wisdom of going the extra mile to deliver a platform rather than replacing existing manual procedures. The next example comes from the cluster infrastructure group, and illustrates some of the more difficult trade-offs you might encounter on your way to automating all the things.

Soothing the Pain: Applying Automation to Cluster Turnups

Ten years ago, the Cluster Infrastructure SRE team seemed to get a new hire every few months. As it turned out, that was approximately the same frequency at which we turned up a new cluster. Because turning up a service in a new cluster gives new hires exposure to a service’s internals, this task seemed like a natural and useful training tool.

The steps taken to get a cluster ready for use were something like the following:

1. Fit out a datacenter building for power and cooling.
2. Install and configure core switches and connections to the backbone.
3. Install a few initial racks of servers.
4. Configure basic services such as DNS and installers, then configure a lock service, storage, and computing.
5. Deploy the remaining racks of machines.
6. Assign resources to user-facing services, so their teams can set up the services.

Steps 4 and 6 were extremely complex. While basic services like DNS are relatively simple, the storage and compute subsystems at that time were still in heavy development, so new flags, components, and optimizations were added weekly. Some services had more than a hundred different component subsystems, each with a complex web of dependencies. Failing to configure one subsystem, or configuring a system or component differently than other deployments, is a customer-impacting outage waiting to happen.

In one case, a multi-petabyte Bigtable cluster was configured to not use the first (logging) disk on 12-disk systems, for latency reasons. A year later, some automation assumed that if a machine’s first disk wasn’t being used, that machine didn’t have any storage configured; therefore, it was safe to wipe the machine and set it up from scratch. All of the Bigtable data was wiped, instantly. Thankfully we had multiple real-time replicas of the dataset, but such surprises are unwelcome. Automation needs to be careful about relying on implicit "safety" signals.

Early automation focused on accelerating cluster delivery. This approach tended to rely upon creative use of SSH for tedious package distribution and service initialization problems. This strategy was an initial win, but those free-form scripts became a cholesterol of technical debt.

Detecting Inconsistencies with Prodtest

As the number of clusters grew, some clusters required hand-tuned flags and settings. As a result, teams wasted more and more time chasing down difficult-to-spot misconfigurations. If a flag that made GFS more responsive to log processing leaked into the default templates, cells with many files could run out of memory under load.
Infuriating and time-consuming misconfigurations crept in with nearly every large configuration change. The creative—though brittle—shell scripts we used to configure clusters were neither scaling to the number of people who wanted to make changes nor to the sheer number of cluster permutations that needed to be built. These shell scripts also failed to resolve more significant concerns before declaring that a service was good to take customer-facing traffic, such as:

- Were all of the service’s dependencies available and correctly configured?
- Were all configurations and packages consistent with other deployments?
- Could the team confirm that every configuration exception was desired?

Prodtest (Production Test) was an ingenious solution to these unwelcome surprises. We extended the Python unit test framework to allow for unit testing of real-world services. These unit tests have dependencies, allowing a chain of tests, and a failure in one test would quickly abort. Take the test shown in Figure 7-1 as an example.

Figure 7-1. ProdTest for DNS Service, showing how one failed test aborts the subsequent chain of tests

A given team’s Prodtest was given the cluster name, and it could validate that team’s services in that cluster. Later additions allowed us to generate a graph of the unit tests and their states. This functionality allowed an engineer to see quickly if their service was correctly configured in all clusters, and if not, why. The graph highlighted the failed step, and the failing Python unit test output a more verbose error message. Any time a team encountered a delay due to another team’s unexpected misconfiguration, a bug could be filed to extend their Prodtest. This ensured that a similar problem would be discovered earlier in the future. SREs were proud to be able to assure their customers that all services—both newly turned up services and existing services with new configuration—would reliably serve production traffic.
For the first time, our project managers could predict when a cluster could "go live," and had a complete understanding of why each cluster took six or more weeks to go from "network-ready" to "serving live traffic." Out of the blue, SRE received a mission from senior management: In three months, five new clusters will reach network-ready on the same day. Please turn them up in one week.

Resolving Inconsistencies Idempotently

A "One Week Turnup" was a terrifying mission. We had tens of thousands of lines of shell script owned by dozens of teams. We could quickly tell how unprepared any given cluster was, but fixing it meant that the dozens of teams would have to file hundreds of bugs, and then we had to hope that these bugs would be promptly fixed.

We realized that evolving from "Python unit tests finding misconfigurations" to "Python code fixing misconfigurations" could enable us to fix these issues faster. The unit test already knew which cluster we were examining and the specific test that was failing, so we paired each test with a fix. If each fix was written to be idempotent, and could assume that all dependencies were met, the problem should have been easy—and safe—to resolve. Requiring idempotent fixes meant teams could run their "fix script" every 15 minutes without fearing damage to the cluster’s configuration. If the DNS team’s test was blocked on the Machine Database team’s configuration of a new cluster, as soon as the cluster appeared in the database, the DNS team’s tests and fixes would start working.

Take the test shown in Figure 7-2 as an example. If TestDnsMonitoringConfigExists fails, as shown, we can call FixDnsMonitoringCreateConfig, which scrapes configuration from a database, then checks a skeleton configuration file into our revision control system. Then TestDnsMonitoringConfigExists passes on retry, and the TestDnsMonitoringConfigPushed test can be attempted. If the test fails, the FixDnsMonitoringPushConfig step runs.
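The test-paired-with-fix loop just described can be sketched as follows. The retry cap and the toy "cluster state" are illustrative assumptions; only the pairing of each test with an idempotent fix, and the retry-until-give-up behavior, come from the text.

```python
# Sketch of pairing each Prodtest-style test with an idempotent fix.
# The framework and the retry cap of 3 are illustrative assumptions.

MAX_ATTEMPTS = 3

def resolve(test, fix, attempts=MAX_ATTEMPTS):
    """Run test; on failure run the idempotent fix and retest.

    Gives up (so a human can be notified) after `attempts` failed cycles."""
    for _ in range(attempts):
        if test():
            return True
        fix()  # safe to rerun: the fix is required to be idempotent
    return False  # caller notifies the owning team

# Toy "cluster state" standing in for a revision control system:
state = {"config_exists": False}

def test_config_exists():
    return state["config_exists"]

def fix_create_config():
    # Idempotent: checking in the same skeleton config twice is a no-op.
    state["config_exists"] = True

resolved = resolve(test_config_exists, fix_create_config)
```

Because the fix is idempotent, this loop can run every 15 minutes without risk: a blocked fix simply fails harmlessly until its dependency (say, the cluster appearing in the Machine Database) is satisfied.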
If a fix fails multiple times, the automation assumes that the fix failed and stops, notifying the user. Armed with these scripts, a small group of engineers could ensure that we could go from "The network works, and machines are listed in the database" to "Serving 1% of websearch and ads traffic" in a matter of a week or two. At the time, this seemed to be the apex of automation technology.

Looking back, this approach was deeply flawed; the latency between the test, the fix, and then a second test introduced flaky tests that sometimes worked and sometimes failed. Not all fixes were naturally idempotent, so a flaky test that was followed by a fix might render the system in an inconsistent state.

Figure 7-2. ProdTest for DNS Service, showing that one failed test resulted in only running one fix

The Inclination to Specialize

Automation processes can vary in three respects:

- Competence, i.e., their accuracy
- Latency, how quickly all steps are executed when initiated
- Relevance, or proportion of real-world process covered by automation

We began with a process that was highly competent (maintained and run by the service owners), high-latency (the service owners performed the process in their spare time or assigned it to new engineers), and very relevant (the service owners knew when the real world changed, and could fix the automation).

To reduce turnup latency, many service-owning teams instructed a single "turnup team" what automation to run. The turnup team used tickets to start each stage in the turnup so that we could track the remaining tasks, and who those tasks were assigned to. If the human interactions regarding automation modules occurred between people in the same room, cluster turnups could happen in a much shorter time. Finally, we had our competent, accurate, and timely automation process! But this state didn’t last long. The real world is chaotic: software, configuration, data, etc.
changed, resulting in over a thousand separate changes a day to affected systems. The people most affected by automation bugs were no longer domain experts, so the automation became less relevant (meaning that new steps were missed) and less competent (new flags might have caused automation to fail). However, it took a while for this drop in quality to impact velocity.

Automation code, like unit test code, dies when the maintaining team isn’t obsessive about keeping the code in sync with the codebase it covers. The world changes around the code: the DNS team adds new configuration options, the storage team changes their package names, and the networking team needs to support new devices. By relieving teams who ran services of the responsibility to maintain and run their automation code, we created ugly organizational incentives:

- A team whose primary task is to speed up the current turnup has no incentive to reduce the technical debt of the service-owning team running the service in production later.
- A team not running automation has no incentive to build systems that are easy to automate.
- A product manager whose schedule is not affected by low-quality automation will always prioritize new features over simplicity and automation.

The most functional tools are usually written by those who use them. A similar argument applies to why product development teams benefit from keeping at least some operational awareness of their systems in production.

Turnups were again high-latency, inaccurate, and incompetent—the worst of all worlds. However, an unrelated security mandate allowed us out of this trap. Much of distributed automation relied at that time on SSH. This is clumsy from a security perspective, because people must have root on many machines to run most commands. A growing awareness of advanced, persistent security threats drove us to reduce the privileges SREs enjoyed to the absolute minimum they needed to do their jobs.
We had to replace our use of sshd with an authenticated, ACL-driven, RPC-based Local Admin Daemon, also known as Admin Servers, which had permissions to perform those local changes. As a result, no one could install or modify a server without an audit trail. Changes to the Local Admin Daemon and the Package Repo were gated on code reviews, making it very difficult for someone to exceed their authority; giving someone the access to install packages would not let them view colocated logs. The Admin Server logged the RPC requestor, any parameters, and the results of all RPCs to enhance debugging and security audits.

Service-Oriented Cluster-Turnup

In the next iteration, Admin Servers became part of service teams’ workflows, both as related to the machine-specific Admin Servers (for installing packages and rebooting) and cluster-level Admin Servers (for actions like draining or turning up a service). SREs moved from writing shell scripts in their home directories to building peer-reviewed RPC servers with fine-grained ACLs. Later on, after the realization that turnup processes had to be owned by the teams that owned the services fully sank in, we saw this as a way to approach cluster turnup as a Service-Oriented Architecture (SOA) problem: service owners would be responsible for creating an Admin Server to handle cluster turnup/turndown RPCs, sent by the system that knew when clusters were ready. In turn, each team would provide the contract (API) that the turnup automation needed, while still being free to change the underlying implementation. As a cluster reached "network-ready," automation sent an RPC to each Admin Server that played a part in turning up the cluster. We now have a low-latency, competent, and accurate process; most importantly, this process has stayed strong as the rate of change, the number of teams, and the number of services seem to double each year.
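The Admin Server pattern described above — check an ACL, perform the action, log the requestor, parameters, and result — can be sketched in a few lines. Everything here (the `ACL` table, group names, `handle_rpc`) is a hypothetical illustration of the pattern, not Google's actual interface.

```python
# Illustrative sketch of an ACL-gated, audited admin-action handler,
# modeled on the Local Admin Daemon described above. All names are
# hypothetical.

ACL = {"install_package": {"release-eng"}, "reboot": {"sre-oncall"}}
AUDIT_LOG = []  # stand-in for a durable audit trail

def handle_rpc(requestor, groups, action, run, **params):
    """ACL-check an admin RPC, execute it, and record an audit entry."""
    if not (set(groups) & ACL.get(action, set())):
        AUDIT_LOG.append(("DENIED", requestor, action, params, None))
        raise PermissionError(f"{requestor} may not call {action}")
    result = run(**params)
    # Audit trail: who, what, with which arguments, and the outcome.
    AUDIT_LOG.append(("OK", requestor, action, params, result))
    return result

# Hypothetical usage:
def install_package(name):
    return f"installed {name}"

handle_rpc("alice", ["release-eng"], "install_package",
           install_package, name="mysqld")
```

The key property, per the text, is that authority is fine-grained (install rights do not imply log-reading rights) and every call — allowed or denied — leaves an audit record.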
As mentioned earlier, our evolution of turnup automation followed a path:

1. Operator-triggered manual action (no automation)
2. Operator-written, system-specific automation
3. Externally maintained generic automation
4. Internally maintained, system-specific automation
5. Autonomous systems that need no human intervention

While this evolution has, broadly speaking, been a success, the Borg case study illustrates another way we have come to think of the problem of automation.

Borg: Birth of the Warehouse-Scale Computer

Another way to understand the development of our attitude toward automation, and when and where that automation is best deployed, is to consider the history of the development of our cluster management systems.31 Like MySQL on Borg, which demonstrated the success of converting manual operations to automatic ones, and the cluster turnup process, which demonstrated the downside of not thinking carefully enough about where and how automation was implemented, developing cluster management also ended up demonstrating another lesson about how automation should be done. Like our previous two examples, something quite sophisticated was created as the eventual result of continuous evolution from simpler beginnings.

Google’s clusters were initially deployed much like everyone else’s small networks of the time: racks of machines with specific purposes and heterogeneous configurations. Engineers would log in to some well-known “master” machine to perform administrative tasks; “golden” binaries and configuration lived on these masters. As we had only one colo provider, most naming logic implicitly assumed that location. As production grew, and we began to use multiple clusters, different domains (cluster names) entered the picture. It became necessary to have a file describing what each machine did, which grouped machines under some loose naming strategy.
This descriptor file, in combination with the equivalent of a parallel SSH, allowed us to reboot (for example) all the search machines in one go. Around this time, it was common to get tickets like “search is done with machine x1, crawl can have the machine now.”

Automation development began. Initially automation consisted of simple Python scripts for operations such as the following:

- Service management: keeping services running (e.g., restarts after segfaults)
- Tracking what services were supposed to run on which machines
- Log message parsing: SSHing into each machine and looking for regexps

Automation eventually mutated into a proper database that tracked machine state, and also incorporated more sophisticated monitoring tools. With the union set of the automation available, we could now automatically manage much of the lifecycle of machines: noticing when machines were broken, removing the services, sending them to repair, and restoring the configuration when they came back from repair.

But to take a step back, this automation was useful yet profoundly limited, due to the fact that abstractions of the system were relentlessly tied to physical machines. We needed a new approach, hence Borg [Ver15] was born: a system that moved away from the relatively static host/port/job assignments of the previous world, toward treating a collection of machines as a managed sea of resources. Central to its success—and its conception—was the notion of turning cluster management into an entity for which API calls could be issued, to some central coordinator. This liberated extra dimensions of efficiency, flexibility, and reliability: unlike the previous model of machine “ownership,” Borg could allow machines to schedule, for example, batch and user-facing tasks on the same machine.
This functionality ultimately resulted in continuous and automatic operating system upgrades with a very small amount of constant32 effort—effort that does not scale with the total size of production deployments. Slight deviations in machine state are now automatically fixed; brokenness and lifecycle management are essentially no-ops for SRE at this point. Thousands of machines are born, die, and go into repairs daily with no SRE effort. To echo the words of Ben Treynor Sloss: by taking the approach that this was a software problem, the initial automation bought us enough time to turn cluster management into something autonomous, as opposed to automated. We achieved this goal by bringing ideas related to data distribution, APIs, hub-and-spoke architectures, and classic distributed system software development to bear upon the domain of infrastructure management.

An interesting analogy is possible here: we can make a direct mapping between the single machine case and the development of cluster management abstractions. In this view, rescheduling on another machine looks a lot like a process moving from one CPU to another: of course, those compute resources happen to be at the other end of a network link, but to what extent does that actually matter? Thinking in these terms, rescheduling looks like an intrinsic feature of the system rather than something one would “automate”—humans couldn’t react fast enough anyway. Similarly in the case of cluster turnup: in this metaphor, cluster turnup is simply additional schedulable capacity, a bit like adding disk or RAM to a single computer. However, a single-node computer is not, in general, expected to continue operating when a large number of components fail. The global computer is—it must be self-repairing to operate once it grows past a certain size, due to the essentially statistically guaranteed large number of failures taking place every second.
This implies that as we move systems up the hierarchy from manually triggered, to automatically triggered, to autonomous, some capacity for self-introspection is necessary to survive.

Reliability Is the Fundamental Feature

Of course, for effective troubleshooting, the details of internal operation that the introspection relies upon should also be exposed to the humans managing the overall system. Analogous discussions about the impact of automation in the noncomputer domain—for example, in airplane flight33 or industrial applications—often point out the downside of highly effective automation:34 human operators are progressively more relieved of useful direct contact with the system as the automation covers more and more daily activities over time. Inevitably, then, a situation arises in which the automation fails, and the humans are now unable to successfully operate the system. The fluidity of their reactions has been lost due to lack of practice, and their mental models of what the system should be doing no longer reflect the reality of what it is doing.35

This situation arises more when the system is nonautonomous—i.e., where automation replaces manual actions, and the manual actions are presumed to be always performable and available just as they were before. Sadly, over time, this ultimately becomes false: those manual actions are not always performable because the functionality to permit them no longer exists.

We, too, have experienced situations where automation has been actively harmful on a number of occasions—see Automation: Enabling Failure at Scale—but in Google’s experience, there are more systems for which automation or autonomous behavior are no longer optional extras. As you scale, this is of course the case, but there are still strong arguments for more autonomous behavior of systems irrespective of size. Reliability is the fundamental feature, and autonomous, resilient behavior is one useful way to get that.
Recommendations

You might read the examples in this chapter and decide that you need to be Google-scale before you have anything to do with automation whatsoever. This is untrue, for two reasons: automation provides more than just time saving, so it’s worth implementing in more cases than a simple time-expended versus time-saved calculation might suggest. But the approach with the highest leverage actually occurs in the design phase: shipping and iterating rapidly might allow you to implement functionality faster, yet rarely makes for a resilient system. Autonomous operation is difficult to convincingly retrofit to sufficiently large systems, but standard good practices in software engineering will help considerably: having decoupled subsystems, introducing APIs, minimizing side effects, and so on.

Automation: Enabling Failure at Scale

Google runs over a dozen of its own large datacenters, but we also depend on machines in many third-party colocation facilities (or "colos"). Our machines in these colos are used to terminate most incoming connections, or as a cache for our own Content Delivery Network, in order to lower end-user latency. At any point in time, a number of these racks are being installed or decommissioned; both of these processes are largely automated. One step during decommission involves overwriting the full content of the disk of all the machines in the rack, after which point an independent system verifies the successful erase. We call this process "Diskerase."

Once upon a time, the automation in charge of decommissioning a particular rack failed, but only after the Diskerase step had completed successfully. Later, the decommission process was restarted from the beginning, to debug the failure. On that iteration, when trying to send the set of machines in the rack to Diskerase, the automation determined that the set of machines that still needed to be Diskerased was (correctly) empty.
Unfortunately, the empty set was used as a special value, interpreted to mean "everything." This means the automation sent almost all the machines we have in all colos to Diskerase. Within minutes, the highly efficient Diskerase wiped the disks on all machines in our CDN, and the machines were no longer able to terminate connections from users (or do anything else useful). We were still able to serve all the users from our own datacenters, and after a few minutes the only effect visible externally was a slight increase in latency. As far as we could tell, very few users noticed the problem at all, thanks to good capacity planning (at least we got that right!). Meanwhile, we spent the better part of two days reinstalling the machines in the affected colo racks; then we spent the following weeks auditing and adding more sanity checks—including rate limiting—into our automation, and making our decommission workflow idempotent.

26 For readers who already feel they precisely understand the value of automation, skip ahead to The Value for Google SRE. However, note that our description contains some nuances that might be useful to keep in mind while reading the rest of the chapter.
27 The expertise acquired in building such automation is also valuable in itself; engineers both deeply understand the existing processes they have automated and can later automate novel processes more quickly.
28 See the following XKCD cartoon: https://xkcd.com/1205/.
29 See, for example, https://blog.engineyard.com/2014/pets-vs-cattle.
30 Of course, not every system that needs to be managed actually provides callable APIs for management—forcing some tooling to use, e.g., CLI invocations or automated website clicks.
31 We have compressed and simplified this history to aid understanding.
32 As in a small, unchanging number.
33 See, e.g., https://en.wikipedia.org/wiki/Air_France_Flight_447.
34 See, e.g., [Bai83] and [Sar97].
35 This is yet another good reason for regular practice drills; see Disaster Role Playing.

Copyright © 2017 Google, Inc. Published by O'Reilly Media, Inc. Licensed under CC BY-NC-ND 4.0
2026-01-13T09:29:20
https://w3techs.com/technologies/details/dn-google
Usage Statistics and Market Share of Google as DNS Server Provider, January 2026 advertise here provided by Q-Success Home Technologies Reports API Sites Quality Users Blog Forum FAQ Search Featured products and services advertise here Technologies Content Management Server-side Languages Client-side Languages JavaScript Libraries CSS Frameworks Web Servers Web Panels Operating Systems Web Hosting Data Centers Reverse Proxies DNS Servers Email Servers SSL Certificate Authorities Content Delivery Traffic Analysis Tools Advertising Networks Tag Managers Social Widgets Site Elements Structured Data Markup Languages Character Encodings Image File Formats Top Level Domains Server Locations Content Languages Trends Technology Changes Comparison Compare with other Breakdown Ranking Content Management Server-side Languages Client-side Languages JavaScript Libraries CSS Frameworks Web Servers Web Panels Operating Systems Web Hosting Data Centers Reverse Proxies Email Servers SSL Certificate Authorities Content Delivery Traffic Analysis Tools Advertising Networks Tag Managers Social Widgets Site Elements Structured Data Markup Languages Character Encodings Image File Formats Top Level Domains Server Locations Content Languages Segmentation Content Management Server-side Languages Client-side Languages JavaScript Libraries CSS Frameworks Web Servers Web Panels Operating Systems Web Hosting Data Centers Reverse Proxies Email Servers SSL Certificate Authorities Content Delivery Traffic Analysis Tools Advertising Networks Tag Managers Social Widgets Site Elements Structured Data Markup Languages Character Encodings Image File Formats Top Level Domains Server Locations Content Languages see FAQ for explanations on advanced reports Technologies > DNS Servers > Google Usage statistics of Google as DNS server provider Request an extensive Google market report. Learn more These diagrams show the usage statistics of Google as DNS server provider. 
See technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily. Google is used as DNS server provider by 2.4% of all the websites. Historical trend This diagram shows the historical trend in the percentage of websites using Google. Our dedicated trend survey shows more DNS servers usage trends . You can find growth rates of Google compared to all other DNS server providers in our Google market report . Market position This diagram shows the market position of Google in terms of popularity and traffic compared to the most popular DNS server providers. Our dedicated market survey shows more DNS servers market data . Popular sites using Google Google.com Youtube.com Spotify.com T.me used on inner pages Telegram.org Google.com.hk Lemonde.fr Infobae.com Abc.es Kleinanzeigen.de Random selection of sites using Google Therapexa.com Midmodcolorado.com Pimp-it.store Syrashop.com Univertshop.com Sites using Google only recently 24h.com.vn Sharechat.com Mpl.live Eva.vn Usmagazine.com More examples of sites You can find more examples of sites using Google in our Google market report , or you can request a custom web technology market report . Technology comparisons Our visitors often compare the usage statistics of Google with SiteGround and o2switch and Bigcommerce . Free technology usage monitoring service Get a notification when a top site starts using Google. Share this page Technology Brief Google Category: DNS Server Providers Google provides various services to run on its servers. Website: cloud.google.com advertise here About Us Disclaimer Terms of Use Privacy Policy Advertising Contact W3Techs on   LinkedIn Mastodon Bluesky Copyright © 2009-2026 Q-Success
2026-01-13T09:29:20
https://w3techs.com/technologies/overview/content_delivery
Usage Statistics and Market Share of JavaScript Content Delivery Networks for Websites, January 2026 advertise here provided by Q-Success Home Technologies Reports API Sites Quality Users Blog Forum FAQ Search Featured products and services advertise here Technologies Content Management Server-side Languages Client-side Languages JavaScript Libraries CSS Frameworks Web Servers Web Panels Operating Systems Web Hosting Data Centers Reverse Proxies DNS Servers Email Servers SSL Certificate Authorities Content Delivery Traffic Analysis Tools Advertising Networks Tag Managers Social Widgets Site Elements Structured Data Markup Languages Character Encodings Image File Formats Top Level Domains Server Locations Content Languages Trends Usage History Market Share History Market Top Site Usage Market Position Performance Page Speed Breakdown Ranking Content Management Server-side Languages Client-side Languages JavaScript Libraries CSS Frameworks Web Servers Web Panels Operating Systems Web Hosting Data Centers Reverse Proxies DNS Servers Email Servers SSL Certificate Authorities Traffic Analysis Tools Advertising Networks Tag Managers Social Widgets Site Elements Structured Data Markup Languages Character Encodings Image File Formats Top Level Domains Server Locations Content Languages see FAQ for explanations on advanced reports Technologies > Content Delivery Usage statistics and market shares of JavaScript content delivery networks Request an extensive JavaScript content delivery networks market report . Learn more This diagram shows the percentages of websites using various JavaScript content delivery networks. See technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily. How to read the diagram: 75.2% of the websites use none of the JavaScript content delivery networks that we monitor. CDNJS is used by 12.0% of all the websites, that is a JavaScript content delivery network market share of 48.6%. 
None 75.2% CDNJS 12.0% 48.6% jsDelivr 8.8% 35.5% Google Hosted Libraries 8.3% 33.5% jQuery CDN 4.3% 17.3% unpkg 2.4% 9.9% Yandex Libraries Hosting 0.4% 1.6% Microsoft Ajax CDN 0.2% 0.9% BootCDN less than 0.1% 0.1% StaticFile less than 0.1% 0.1% Baidu Resource Library less than 0.1% 0.1% W3Techs.com, 13 January 2026 absolute usage percentage market share Percentages of websites using various JavaScript content delivery networks Note: a website may use more than one JavaScript content delivery network The following JavaScript content delivery networks have a market share of less than 0.1% Statically Yahoo API CDN ArvanCloud Is there a technology missing? Registered users can make a proposal to add a technology. Do you want to stay informed about this survey? Use our monthly technology survey RSS Feed . Registered users can also subscribe to a monthly technology survey email. Share this page Technology Brief JavaScript Content Delivery Networks Content Delivery Networks for serving JavaScript libraries. Latest related posting   read all Web Technologies of the Year 2025 5 January 2026 We compiled the list of web technologies that saw the largest increase in usage in 2025. » more No related forum entry yet content delivery forum advertise here About Us Disclaimer Terms of Use Privacy Policy Advertising Contact W3Techs on   LinkedIn Mastodon Bluesky Copyright © 2009-2026 Q-Success
2026-01-13T09:29:20
https://w3techs.com/technologies/overview/tag_manager
Usage Statistics and Market Share of Tag Managers for Websites, January 2026
Provided by Q-Success

Technologies > Tag Managers

This diagram shows the percentages of websites using various tag managers. See the technologies overview for explanations of the methodologies used in the surveys. Our reports are updated daily.

How to read the diagram: 52.7% of the websites use none of the tag managers that we monitor. Google Tag Manager is used by 47.1% of all websites; that corresponds to a tag manager market share of 99.7%.
                          absolute usage    market share
None                           52.7%
Google Tag Manager             47.1%            99.7%
Adobe DTM                       0.2%             0.4%
Tealium                         0.1%             0.2%
Matomo Tag Manager              0.1%             0.2%
Yahoo Tag Manager         less than 0.1%         0.1%

W3Techs.com, 13 January 2026. Percentages of websites using various tag managers. Note: a website may use more than one tag manager.

The following tag managers have a market share of less than 0.1%: Ensighten, Commanders Act.

Technology Brief: Tag management systems support webmasters in managing snippets of code (referred to as "tags") on their websites. The managed code is mostly related to third-party services such as advertising or traffic analysis services.

Latest related posting: "Web Technologies of the Year 2025" (5 January 2026): We compiled the list of web technologies that saw the largest increase in usage in 2025.
https://w3techs.com/technologies/details/dn-siteground
Usage Statistics and Market Share of SiteGround as DNS Server Provider, January 2026
Provided by Q-Success

Technologies > DNS Servers > SiteGround

These diagrams show the usage statistics of SiteGround as DNS server provider.
See the technologies overview for explanations of the methodologies used in the surveys. Our reports are updated daily. SiteGround is used as DNS server provider by 1.5% of all websites.

Historical trend: this diagram shows the historical trend in the percentage of websites using SiteGround.

Market position: this diagram shows the market position of SiteGround in terms of popularity and traffic compared to the most popular DNS server providers.

Popular sites using SiteGround: Survimo.com, Pcre.org, Bandlab.com (used on subdomain), Youradchoices.com, Ultimasnoticias.com.ve, Unblast.com, Demoslotsfun.com, Ragno.com, Centredevils.co.uk, Nutrifactor.com.pk

Random selection of sites using SiteGround: Kellylundbergofficial.com, Wannabe-toys.com, Sunshinewatersportsofpc.com, Stottiehinckley.co.uk, 33rulebook.com

Sites using SiteGround only recently: Milvus.com.br (used on subdomain), Earthtimes.org, Gadstyle.com, Investingnote.com (used on subdomain), Chanhassendt.com

Technology comparisons: our visitors often compare the usage statistics of SiteGround with GoDaddy Group, IONOS and Cloudflare.

Technology Brief: SiteGround is an internet services provider headquartered in Bulgaria. Category: DNS Server Providers. Website: siteground.com
https://goo.gle/sre-classroom
Google SRE - SRE workshop | Learn about NALSD and SRE
Site Reliability Engineering

SRE Classroom: Learning about NALSD and SRE

Introduction

SRE Classroom is a collection of workshops developed by Google's Site Reliability Engineering group. The goals of this workshop are to (1) introduce participants to the principles of non-abstract large system design (NALSD), and (2) provide hands-on experience with applying these principles to the design and evaluation of these systems. We consider NALSD a concept fundamental to SRE, and understanding its principles provides a basis for having meaningful conversations about the design and operation of large software systems.

Tutorials

Distributed PubSub: Build a planet-scale distributed PubSub system using NALSD principles. Learn about some foundational large system design principles and concepts.
Topics include correctness, reliability, performance, different inter-system communication styles, and more. We introduce the problem requirements in detail and walk through an example solution.

Distributed ImageServer: Build a planet-scale distributed ImageServer system using NALSD principles. Learn about some foundational large system design principles and concepts. Topics include sharding, replication, latency, load balancing, and more. We introduce the problem requirements in detail and walk through an example solution.

The Art of SLOs: The Art of SLOs introduces participants to concepts in measuring service reliability, Service Level Indicators (SLIs) and Service Level Objectives (SLOs), and gives them hands-on experience with creating these measures in practice.

Additional Resources

Supplementary Materials: this material lets you continue your study of Non-Abstract Large System Design, independently of the tutorial material:
- NALSD Flash Cards (A4)
- NALSD Flash Cards (Letter)
- NALSD chapter in the SRE Workbook

If you find this useful, tell us which topics you want to see in future exercises. Please use the issue tracker to send us your thoughts and suggestions. Alternatively, send us a tweet at @googlesre.

Licensing

These materials are released under the Creative Commons CC-BY-4.0 license for anyone to use and reuse, as long as Google is credited as the original author. If you want to suggest improvements, have any problems with the content, or just want to ask a question, please create a bug in our issue tracker component.
https://w3techs.com/feedback
W3Techs - Feedback

If you have any questions, suggestions or remarks to make about this site, please use the forum or contact us via email.
https://w3techs.com/technologies/details/dn-groupgodaddy
Usage Statistics and Market Share of GoDaddy Group as DNS Server Provider, January 2026
Provided by Q-Success

Technologies > DNS Servers > GoDaddy Group

These diagrams show the usage statistics of GoDaddy Group as DNS server provider.
See the technologies overview for explanations of the methodologies used in the surveys. Our reports are updated daily. GoDaddy Group is used as DNS server provider by 10.1% of all websites.

Subcategories of GoDaddy Group (how to read: GoDaddy is used by 99.4% of all websites that use GoDaddy Group):

GoDaddy               99.4%
Host Europe            0.7%
Domainfactory     less than 0.1%
123-reg           less than 0.1%

W3Techs.com, 13 January 2026. Note: a website may use more than one subcategory of GoDaddy Group.

Historical trend: this diagram shows the historical trend in the percentage of websites using GoDaddy Group.

Market position: this diagram shows the market position of GoDaddy Group in terms of popularity and traffic compared to the most popular DNS server providers.

Popular sites using GoDaddy Group: Calculator.net, Godaddy.com, Mailinabox.email, Uk.com, Boardgamegeek.com, Fr.de, Menards.com, Thriftbooks.com, Jigsawplanet.com, Chess-results.com

Random selection of sites using GoDaddy Group: Weareholy.com, Chiropodist-liverpool.co.uk, Wattleandflame.com.au, Birchcoffee.com, Hillsboroughdefense.com

Sites using GoDaddy Group only recently: Safwabank.com, Easyindexportal.de, Cpc.com.eg, Goveer.com, Mistnews.com
Technology comparisons: our visitors often compare the usage statistics of GoDaddy Group with SiteGround, Aruba Group and Webempresa.

Technology Brief: GoDaddy is an internet and IT services provider. Category: DNS Server Providers. Website: godaddy.com

Latest related posting: "Web Technology Trends - June 2025" (5 June 2025): We present some of the highlights of our surveys in the various categories that might not be obvious from browsing through our pages.
https://w3techs.com/technologies/details/dn-groupendurance
Usage Statistics and Market Share of Newfold Digital Group as DNS Server Provider, January 2026
Provided by Q-Success

Technologies > DNS Servers > Newfold Digital Group

These diagrams show the usage statistics of Newfold Digital Group as DNS server provider.
See the technologies overview for explanations of the methodologies used in the surveys. Our reports are updated daily. Newfold Digital Group is used as DNS server provider by 4.0% of all websites.

Subcategories of Newfold Digital Group (how to read: HostGator is used by 25.6% of all websites that use Newfold Digital Group):

HostGator               25.6%
Bluehost                24.7%
Network Solutions       15.6%
Register.com             7.1%
PublicDomainRegistry     5.4%
Dreamscape Networks      5.2%
ResellerClub             5.1%
Web.com                  3.0%
BigRock                  2.6%
Newfold Digital          2.5%
Domain.com               1.4%
MarkMonitor              0.9%
Digital Pacific          0.5%
Panthur                  0.3%
Crucial                  0.2%
Web24                    0.1%

W3Techs.com, 13 January 2026. Note: a website may use more than one subcategory of Newfold Digital Group.

Historical trend: this diagram shows the historical trend in the percentage of websites using Newfold Digital Group.

Market position: this diagram shows the market position of Newfold Digital Group in terms of popularity and traffic compared to the most popular DNS server providers.
Popular sites using Newfold Digital Group: Tjk.org, Eltiempo.com, Tvspielfilm.de, Onward.co.jp, Missyusa.com, Garnstudio.com, Riverisland.com, Uppcl.org, Indriya.com, Us.com

Random selection of sites using Newfold Digital Group: Partesricoh.com, Anchaldelightproduct.in, Nissan.ph, Vitakin.com, Rajasthanplots.com

Sites using Newfold Digital Group only recently: Tcmsystem.net, Treehousefoods.com, Robinboutique.com, Freewestmedia.com, Iec-glc.gov.gh

Technology comparisons: our visitors often compare the usage statistics of Newfold Digital Group with Automattic, United Internet and SiteGround.

Technology Brief: Newfold Digital (formerly Endurance International Group (EIG) and Web.com) provides internet services under various brands. Category: DNS Server Providers. Website: newfold.com
https://w3techs.com/technologies/topsite/dns_server
Top Site Usage Statistics of DNS Server Providers
Provided by Q-Success

Technologies > DNS Servers > Top Site Usage

DNS server providers ranked by usage on top websites.

This diagram shows the share of top sites among all sites for various DNS server providers. We calculate a top-site score from the usage of a technology by top sites compared to its usage by average sites. The score is derived from measurements at several traffic levels, but roughly speaking, a score of 2 means that usage of the technology is twice as common among the top 1,000 sites as among all sites. A score of -3 means that usage by all sites is three times as common as by the top 1,000 sites. Only technologies used by 10 or more sites are included in this report. See the technologies overview for further explanations of the methodologies used in the surveys.

How to read the diagram: usage of FPT among the top 1,000 sites is 469 times as common as among all sites.

FPT      469
Cisco     98.5
Azion     60.9

W3Techs.com, 13 January 2026. Top-site scores for DNS server providers.

You can find top site usage data for all 726 DNS server providers in our DNS server providers market report.
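W3Techs derives the published score from measurements at several traffic levels and does not publish the exact formula. As a rough illustration of the scoring idea described above, here is a simplified single-level sketch; the function name and signature are illustrative assumptions, not the actual W3Techs implementation:

```python
def top_site_score(top_usage: float, overall_usage: float) -> float:
    """Simplified top-site score (single traffic level only).

    A positive score r means the technology is r times as common among
    top sites as among all sites; a negative score -r means it is r times
    as common among all sites as among top sites.
    """
    ratio = top_usage / overall_usage
    return ratio if ratio >= 1.0 else -1.0 / ratio

# Roughly: 0.2% of top sites vs 0.1% of all sites scores about 2;
# 0.1% of top sites vs 0.3% of all sites scores about -3.
```

The signed convention mirrors the description in the text: scores above 1 and below -1 read symmetrically as "r times as common" in one direction or the other, avoiding the compressed 0..1 range that a plain ratio would give for underrepresented technologies.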
Technology Brief: a DNS (domain name system) server manages internet domain names and their associated records such as IP addresses. We group brands that are owned by the same entity to show the real relevant market shares.

Latest related posting: "Web Technologies of the Year 2025" (5 January 2026): We compiled the list of web technologies that saw the largest increase in usage in 2025.
https://w3techs.com/technologies/details/dn-groupone
Usage Statistics and Market Share of Group.one as DNS Server Provider, January 2026
Provided by Q-Success

Technologies > DNS Servers > Group.one

These diagrams show the usage statistics of Group.one as DNS server provider.
See the technologies overview for explanations of the methodologies used in the surveys. Our reports are updated daily. Group.one is used as DNS server provider by 1.2% of all websites.

Subcategories of Group.one (how to read: One.com is used by 46.2% of all websites that use Group.one):

One.com       46.2%
Webglobe      13.6%
Hostnet        8.1%
Alfahosting    6.9%
Dogado         6.6%
Metanet        6.1%
Easyname       3.5%
Profihost      2.4%
Checkdomain    2.1%
Zoner Oy       1.9%
Uniweb         1.8%
Antagonist     0.7%

W3Techs.com, 13 January 2026. Note: a website may use more than one subcategory of Group.one.

Historical trend: this diagram shows the historical trend in the percentage of websites using Group.one.

Market position: this diagram shows the market position of Group.one in terms of popularity and traffic compared to the most popular DNS server providers.

Popular sites using Group.one: Idos.cz, Lectio.dk, One.com, Pravda.sk, Jatkoaika.com, Sportisimo.cz, Slevomat.cz, Martinus.sk, Zeeman.com, Online2pdf.com

Random selection of sites using Group.one: Nextgenpayment.eu, Codeconnect.group, Detsky-lekar-ruzickova.cz, Healthyheads.nl, Pascalterheege.nl

Sites using Group.one only recently: Hestesportcenteret.no, Ulabute.com, Reatch.ch, Smaktilbehag.no

Technology comparisons: our visitors often compare the usage statistics of Group.one with SiteGround, Hostnet and Bigcommerce.
Technology Brief: Group.one is a group of IT and internet services providers, headquartered in Sweden. Category: DNS Server Providers. Website: group.one
https://w3techs.com/disclaimer
W3Techs - Disclaimer

W3Techs collects and processes information about the usage of web technologies by applying a number of techniques. This information may be incomplete and inaccurate due to the vastness and complexity of the matter at hand. These are some of the reasons why:

- In order to obtain any information from websites, we rely on the websites themselves, their owners or their webmasters to provide such information. Some websites are more open to sharing this type of information than others. Some technologies may provide more means to reveal information about their usage than others.
- In some cases, the information provided by websites, their owners or their webmasters may be wrong. We may not be able to detect all cases of misinformation.
- In some cases, we use heuristics to further interpret the data we collect, in order to better determine the usage of certain technologies. These heuristics may occasionally lead to wrong conclusions.
- Some technologies may become visible only after user interaction, e.g. after entering a search term in an online form or after user login. Those technologies may remain undetected, as our analyzer does not attempt to mimic any user interaction, which would often contravene the terms of use of the websites.
- Some technologies may become visible only after executing programs that are downloaded from a website, e.g. JavaScript code. Those technologies may remain undetected.
- We may not detect technologies if they are used only on some pages of a website, as we do not analyze each page of a website.
- Our research does not cover all websites, but a significant sample of sites.
- Although we make a considerable effort to include all major technologies of the categories that we report, our research does not cover all technologies.
- Although we make a considerable effort to keep our data up to date, we may not be able to detect changes of technologies instantaneously. Therefore, some of our data may temporarily represent an earlier status of websites.
- Although we apply test processes, the software we use to collect data and to produce the surveys may occasionally have defects, and thus produce inaccurate results.
https://w3techs.com/technologies/details/dn-totalwebhostingsolutions
Usage Statistics and Market Share of Your.Online as DNS Server Provider, January 2026
Provided by Q-Success

Technologies > DNS Servers > Your.Online

These diagrams show the usage statistics of Your.Online as DNS server provider.
These diagrams show the usage statistics of Your.Online as DNS server provider. See technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily.

Your.Online is used as DNS server provider by 1.0% of all the websites.

Subcategories of Your.Online

Percentages of websites using various subcategories of Your.Online. How to read the table: Gandi is used by 41.2% of all the websites that use Your.Online. Note: a website may use more than one subcategory of Your.Online.

Gandi            41.2%
o2switch         23.2%
Yourhosting       6.5%
Loading           4.4%
Versio            4.3%
Pair Networks     3.6%
Argeweb           3.6%
1blu              2.1%
Flexwebhosting    2.0%
Heart Internet    1.9%
Inleed            1.7%
Manitu            1.3%
Shock Media       1.2%
Hostinet          0.8%
Axarnet           0.8%
Savvii            0.6%
Nexylan           0.4%
LinQhost          0.3%
okITup            0.2%

W3Techs.com, 13 January 2026

Historical trend: this diagram shows the historical trend in the percentage of websites using Your.Online. Growth rates of Your.Online compared to all other DNS server providers are available in our Your.Online market report.

Market position: this diagram shows the market position of Your.Online in terms of popularity and traffic compared to the most popular DNS server providers.
Popular sites using Your.Online: Ouest-france.fr, Dafont.com, Elephantbet.co.mz, Ladepeche.fr, Geny.com, Letelegramme.fr, Midilibre.fr, Lanouvellerepublique.fr, But.fr, Lindependant.fr

Random selection of sites using Your.Online: Vedruna.eu, Barefootdelft.nl, Monkeyjetski.com, Dmr-electronics.com, Pompendiscounter.be

Sites using Your.Online only recently: Raja.fr, Hydrogenaudio.org, Zombiesrungame.com, Fondation-louisroederer.com, Huisdiergedenken.nl

Technology comparisons: our visitors often compare the usage statistics of Your.Online with Newfold Digital Group, HostPapa and Hosterion.

Technology Brief: Your.Online (formerly Total Webhosting Solutions) is a group of web hosting companies, headquartered in the Netherlands. Category: DNS Server Providers. Website: your.online

Copyright © 2009-2026 Q-Success
2026-01-13T09:29:20
https://w3techs.com/technologies/details/dn-unitedinternet
Usage Statistics and Market Share of United Internet as DNS Server Provider, January 2026 (W3Techs, provided by Q-Success)

Usage statistics of United Internet as DNS server provider
These diagrams show the usage statistics of United Internet as DNS server provider. See technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily.

United Internet is used as DNS server provider by 3.8% of all the websites.

Subcategories of United Internet

Percentages of websites using various subcategories of United Internet. How to read the table: IONOS is used by 54.7% of all the websites that use United Internet. Note: a website may use more than one subcategory of United Internet.

IONOS              54.7%
Strato             18.5%
home.pl             6.2%
InterNetX           5.1%
Fasthosts           4.0%
Arsys               3.5%
United Domains      2.9%
World4You           2.6%
SchlundTech         1.5%
Piensa Solutions    0.6%
we22                0.1%
1&1 Versatel        0.1%
Web.de             less than 0.1%
Cronon             less than 0.1%
Sedo               less than 0.1%

W3Techs.com, 13 January 2026

Historical trend: this diagram shows the historical trend in the percentage of websites using United Internet. Growth rates of United Internet compared to all other DNS server providers are available in our United Internet market report.

Market position: this diagram shows the market position of United Internet in terms of popularity and traffic compared to the most popular DNS server providers.
Popular sites using United Internet: Web.de, Gmx.net, Besoccer.com, Abendblatt.de, Chefkoch.de, Morgenpost.de, Markt.de, Leckerschmecker.me, Ernstings-family.de, Derwesten.de

Random selection of sites using United Internet: Bcs365.co.uk, Fitness-experts.de, Nimex.se, Psicologomajadahonda.net, Gerstengras-natur.de

Sites using United Internet only recently: Eventim.hu, Instax.eu, Empiremedals.com, Bilder.de, Elypso.de

Technology comparisons: our visitors often compare the usage statistics of United Internet with Hostinger, Beget and Newfold Digital Group.

Technology Brief: United Internet is a German internet services company. Category: DNS Server Providers. Website: united-internet.de

Latest related posting: Web Technologies of the Year 2023 (2 January 2024): we compiled the list of web technologies that saw the largest increase in usage in 2023.
2026-01-13T09:29:20
https://w3techs.com/technologies/history_overview/dns_server
Historical trends in the usage statistics of DNS server providers, January 2026 (W3Techs, provided by Q-Success)

Historical trends in the usage statistics of DNS server providers

This report shows the historical trends in the usage of the top DNS server providers since January 2025. All values are percentages of all websites, measured on the first of each month; columns Jan through Dec are 2025, and the final two columns are 1 January and 13 January 2026.

Provider                Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec | Jan'26 13 Jan'26
Cloudflare             14.3 14.4 14.5 14.4 14.4 14.4 14.5 14.5 14.6 14.7 14.9 15.1 |  15.3   15.4
GoDaddy Group          10.3 10.3 10.3 10.3 10.3 10.2 10.2 10.2 10.1 10.1 10.1 10.1 |  10.1   10.1
Newfold Digital Group   4.5  4.5  4.4  4.4  4.3  4.3  4.2  4.2  4.1  4.1  4.1  4.0 |   4.0    4.0
Hostinger               2.9  3.0  3.1  3.2  3.3  3.4  3.5  3.6  3.7  3.8  3.9  3.9 |   4.0    4.0
United Internet         3.6  3.6  3.7  3.7  3.7  3.7  3.7  3.7  3.7  3.7  3.7  3.7 |   3.8    3.8
Wix                     2.9  3.0  3.1  3.2  3.3  3.4  3.4  3.5  3.5  3.6  3.6  3.7 |   3.7    3.7
Amazon                  4.3  4.2  4.0  3.9  3.8  3.8  3.8  3.7  3.7  3.7  3.6  3.7 |   3.8    3.7
team.blue               2.9  2.9  2.9  2.9  2.9  2.9  2.9  3.1  3.0  3.0  3.0  3.0 |   3.0    3.0
Google                  2.3  2.3  2.3  2.3  2.3  2.3  2.3  2.3  2.3  2.4  2.4  2.4 |   2.4    2.4
OVH                     2.0  2.0  2.0  2.0  2.0  1.9  1.9  1.9  1.9  1.9  1.9  1.9 |   1.9    1.9
GMO Internet Group      1.8  1.8  1.8  1.8  1.8  1.8  1.8  1.8  1.8  1.8  1.8  1.8 |   1.8    1.8
Namecheap               1.5  1.5  1.6  1.6  1.6  1.6  1.6  1.6  1.6  1.6  1.6  1.7 |   1.7    1.7

The diagram shows only DNS server providers with more than 1% usage. Find more details in our extensive DNS server providers market reports.
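The growth rates this report alludes to can be computed directly from the monthly series; a minimal sketch, with a few series transcribed from the table (1 January 2025 versus 13 January 2026):

```python
# Year-over-year change in usage share for selected DNS server providers,
# using the published monthly series (values in % of all websites).
series = {
    "Cloudflare": (14.3, 15.4),   # 1 Jan 2025 -> 13 Jan 2026
    "Hostinger":  (2.9, 4.0),
    "Amazon":     (4.3, 3.7),
}

def change(start, end):
    """Return (absolute change in percentage points, relative change in %)."""
    return end - start, (end - start) / start * 100

for provider, (start, end) in series.items():
    pp, rel = change(start, end)
    print(f"{provider}: {pp:+.1f} pp ({rel:+.1f}%)")
```

Run against the table, this shows why Hostinger's rise stands out: its absolute gain matches Cloudflare's (about +1.1 percentage points), but relative to its small starting share the growth is far steeper.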
2026-01-13T09:29:20
https://w3techs.com/technologies/overview/character_encoding
Usage Statistics and Market Share of Character Encodings for Websites, January 2026 (W3Techs, provided by Q-Success)

Usage statistics of character encodings for websites

This diagram shows the percentages of websites using various character encodings. See technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily. How to read the diagram: UTF-8 is used by 98.9% of all the websites whose character encoding we know.
UTF-8           98.9%
ISO-8859-1       1.0%
Windows-1252     0.3%
Windows-1251     0.2%
EUC-JP           0.1%
EUC-KR           0.1%
Shift JIS        0.1%

W3Techs.com, 13 January 2026. Note: a website may use more than one character encoding.

The following character encodings are used by less than 0.1% of the websites: GB2312, Windows-1250, ISO-8859-2, Big5, ISO-8859-15, ISO-8859-9, US-ASCII, GBK, Windows-1254, Windows-874, Windows-1256, Windows-1255, TIS-620, ISO-8859-7, Windows-1253, UTF-16, GB18030, Windows-1257, KOI8-R, ISO-8859-4, KS C 5601, ISO-2022-JP, UTF-7, ISO-8859-8, ISO-8859-5, ISO-8859-6, Windows-31J, KOI8-U, Windows-1258, ISO-8859-16, ANSI_X3.110-1983, ISO-8859-13, ISO-8859-3, Big5 HKSCS, ISO-8859-10, ISO-8859-14, Windows-949, ISO-8859-11, IBM850.

Technology Brief: a character encoding system assigns a computer-internal representation (e.g. a number) to every character of an alphabet.
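The Technology Brief's definition can be made concrete: the same character maps to different byte sequences under different encodings from the survey above. A small Python illustration:

```python
# The same text, represented under two of the encodings in the survey.
text = "café"
utf8 = text.encode("utf-8")          # "é" becomes two bytes: 0xC3 0xA9
latin1 = text.encode("iso-8859-1")   # "é" becomes one byte: 0xE9

print(utf8)    # b'caf\xc3\xa9'
print(latin1)  # b'caf\xe9'

# Decoding with the matching encoding recovers the identical string,
# which is what a correct charset declaration guarantees for a web page.
assert utf8.decode("utf-8") == latin1.decode("iso-8859-1") == "café"
```

This byte-level ambiguity is why a page must declare its encoding: the bytes alone don't say which mapping was used.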
2026-01-13T09:29:20
https://w3techs.com/technologies/details/dn-cloudflare
Usage Statistics and Market Share of Cloudflare as DNS Server Provider, January 2026 (W3Techs, provided by Q-Success)

Usage statistics of Cloudflare as DNS server provider

These diagrams show the usage statistics of Cloudflare as DNS server provider.
See technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily.

Cloudflare is used as DNS server provider by 15.4% of all the websites.

Historical trend: this diagram shows the historical trend in the percentage of websites using Cloudflare. Growth rates of Cloudflare compared to all other DNS server providers are available in our Cloudflare market report.

Market position: this diagram shows the market position of Cloudflare in terms of popularity and traffic compared to the most popular DNS server providers.

Popular sites using Cloudflare: Cloudflare.com, Archive.org, Shopify.com, Discord.com, Chatgpt.com, Hubspot.com, Independent.co.uk, Pixabay.com, Mediafire.com, People.com

Random selection of sites using Cloudflare: Bnsach.com, Restaurant-luchon.com, Deerfields.com, Queropassagem.com.br, Magnumtogelapi.com

Sites using Cloudflare only recently: Lefigaro.fr (used on subdomain), Classlink.com, Screener.in, Newsweek.com (used on subdomain), Soundcloud.com (used on subdomain)

Technology comparisons: our visitors often compare the usage statistics of Cloudflare with GoDaddy, Tilda and SiteGround.

Technology Brief: Cloudflare provides DNS servers and other web services. Category: DNS Server Providers. Website: cloudflare.com/...

Latest related posting: Web Technologies of the Year 2025 (5 January 2026): we compiled the list of web technologies that saw the largest increase in usage in 2025.
2026-01-13T09:29:20
https://goo.gle/sre-20-ll
Google SRE lessons - key principles of site reliability engineering (Site Reliability Engineering, Google)

Lessons Learned from Twenty Years of Site Reliability Engineering
Or, Eleven things we have learned as Site Reliability Engineers at Google

Authors: Adrienne Walcer, Kavita Guliani, Mikel Ward, Sunny Hsiao, and Vrai Stacey
Contributors: Ali Biber, Guy Nadler, Luisa Fearnside, Thomas Holdschick, and Trevor Mattson-Hamilton

Foreword

A lot can happen in twenty years, especially when you're busy growing. Two decades ago, Google had a pair of small datacenters, each housing a few thousand servers, connected in a ring by a pair of 2.4G network links. We ran our private cloud (though we didn't call it that at the time) using Python scripts such as "Assigner", "Autoreplacer", and "Babysitter", which operated on config files full of individual server names.
We had a small database of machines (MDB) which helped keep information about individual servers organized and durable. Our small team of engineers used scripts and configs to solve some common problems automatically, and to reduce the manual labor required to manage our little fleet of servers.

Time passed, Google's users came for the search and stayed for the free GB of Gmail, and our fleet and network grew with them. Today, in terms of computing power, we are over 1,000 times as large as we were 20 years ago; in network, over 10,000 times as large, and we spend far less effort per server than we used to while enjoying much better reliability from our service stack. Our tools have evolved from a collection of Python scripts, to integrated ecosystems of services, to a unified platform which offers reliability by default. And our understanding of the problems and failure modes of distributed systems also evolved, as we experienced new classes of outages. We created the Wheel of Misfortune, we wrote Service Best Practices guides, we published Google's Greatest Hits, and today we are delighted to present:

— Benjamin Treynor Sloss, Creator of Google SRE

Lessons learned from two decades of Site Reliability Engineering

Let's start back in 2016, when YouTube was offering your favorite videos such as "Carpool Karaoke with Adele" and the ever-catchy "Pen-Pineapple-Apple-Pen." YouTube experienced a fifteen-minute global outage, due to a bug in YouTube's distributed memory caching system, disrupting YouTube's ability to serve videos. Here are three lessons we learned from this incident.

1. The riskiness of a mitigation should scale with the severity of the outage

There's a meme where one person posts a picture of a spider seen in their house, and the caption says, "TIME 2 MOVE 2 A NEW HOUSE!". The joke is that a minor incident (seeing a scary spider) is met with a severe mitigation (abandoning your current home and moving to a new one).
We, here in SRE, have had some interesting experiences in choosing a mitigation with more risks than the outage it's meant to resolve. During the aforementioned YouTube outage, a risky load-shedding process didn't fix the outage... it instead created a cascading failure. We learned the hard way that during an incident, we should monitor and evaluate the severity of the situation and choose a mitigation path whose riskiness is appropriate for that severity. In the best case, a risky mitigation resolves an outage. In the worst case, the risky mitigation misfires and the outage is prolonged by the very thing intended to fix it. Additionally, if everything is broken, you can make an informed decision to bypass standard procedures.

2. Recovery mechanisms should be fully tested before an emergency

An emergency fire evacuation in a tall city building is a terrible opportunity to use a ladder for the first time. Similarly, an outage is a terrible opportunity to try a risky load-shedding process for the first time. To keep your cool during a high-risk and high-stress situation, it's important to practice recovery mechanisms and mitigations beforehand and verify that:

- they'll do what you need them to do
- you know how to do them

Testing recovery mechanisms has the fun side effect of reducing the risk of performing some of these actions. Since this messy outage, we've doubled down on testing.

3. Canary all changes

At one point, we wanted to push a caching configuration change. We were pretty sure that it would not lead to anything bad. But pretty sure is not 100% sure. Turns out, caching was a pretty critical feature for YouTube, and the config change had some unintended consequences that fully hobbled the service for 13 minutes. Had we canaried those global changes with a progressive rollout strategy, this outage could have been curbed before it had global impact. Read more about the canary strategy in this paper, and learn more in this video.
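The canary idea in lesson 3 can be sketched as a progressive rollout loop. This is a minimal illustration, not Google's actual tooling: the stage sizes, the `healthy()` probe, and the error-rate threshold are all invented for the example.

```python
# Hypothetical progressive rollout: push a change to ever-larger fractions
# of the fleet, checking health after each stage before going wider.
STAGES = [0.01, 0.05, 0.25, 1.0]   # fraction of fleet per stage (assumed)

def healthy(error_rate):
    """Stand-in health probe: fail the stage if the error rate is too high."""
    return error_rate < 0.01

def rollout(change, error_rate_probe):
    deployed = 0.0
    for stage in STAGES:
        deployed = stage
        if not healthy(error_rate_probe(stage)):
            return ("rolled_back", deployed)  # stop before global impact
    return ("complete", deployed)

# A change that misbehaves once it reaches 25% of the fleet is caught
# at that stage instead of after a global push.
result = rollout("cache-config-v2", lambda f: 0.0 if f < 0.25 else 0.05)
print(result)  # ('rolled_back', 0.25)
```

The point of the staged structure is exactly the lesson's: a bad change is detected while its blast radius is still a small fraction of the fleet.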
Around the same timeframe, YouTube's slightly younger sibling, Google Calendar, also experienced an outage, which serves as the backdrop for the next two lessons.

4. Have a "Big Red Button"

A "Big Red Button" is a unique but highly practical safety feature: it should kick off a simple, easy-to-trigger action that reverts whatever triggered the undesirable state and (ideally) shuts down whatever's happening. "Big Red Buttons" come in many shapes and sizes, and it's important to identify what those big red buttons might be before you submit a potentially risky action. We once narrowly missed a major outage because the engineer who submitted the would-be-triggering change unplugged their desktop computer before the change could propagate. So when planning your major rollouts, consider: what is my big red button? Ensure every service dependency has a "big red button" to exercise in an emergency. See "Generic Mitigations" for more!

5. Unit tests alone are not enough - integration testing is also needed

Ahh... unit tests. They verify that an individual component can perform the way we need it to. Unit tests have intentionally limited scope and are super helpful, but they also don't fully replicate the runtime environment and production demands that might exist. For this reason, we are big advocates of integration testing! We can use integration tests to verify that jobs and tasks can perform a cold start. Will things work the way we want them to? Will components work together the way we want them to? Will these components successfully create the system we want them to? This lesson was learned during a Calendar outage in which our testing didn't follow the same path as real use, resulting in plenty of testing... that didn't help us assess how a change would perform in reality.

Shifting to an incident that happened in February 2017, we find our next two lessons.
First, unavailable OAuth tokens caused millions of users to be logged out of devices and services, and 32,000 OnHub and Google WiFi devices to perform a factory reset. Manual account recovery claims jumped by 10x because of failed logins. It took Google about 12 hours to fully recover from the outage.

6. COMMUNICATION CHANNELS! AND BACKUP CHANNELS!! AND BACKUPS FOR THOSE BACKUP CHANNELS!!!

Yes, it was a bad time. You want to know what made it worse? Teams were expecting to be able to use Google Hangouts and Google Meet to manage the incident. But when 350M users were logged out of their devices and services... relying on these Google services was, in retrospect, kind of a bad call. Ensure that you have non-dependent backup communication channels, and that you have tested them.

Then, the same 2017 incident led us to better understand graceful degradation:

7. Intentionally degrade performance modes

It's easy to think of availability as either "fully up" or "fully down"... but being able to offer a continuous minimum level of functionality in a degraded performance mode helps to provide a more consistent user experience. So we've built degraded performance modes carefully and intentionally, so that during rough patches the degradation might not even be user-visible (it might be happening right now!). Services should degrade gracefully and continue to function under exceptional circumstances.

This next lesson is a recommendation to ensure that your last-line-of-defense system works as expected in extreme scenarios, such as natural disasters or cyberattacks, that result in loss of productivity or service availability.

8. Test for disaster resilience

Besides unit testing and integration testing, there are other types of very important testing: disaster resilience and recovery testing.
While resilience testing verifies that your service or system can survive in the event of faults, latency, or disruptions, recovery testing verifies that your service can transition back to homeostasis after a full shutdown. Both should be critical pieces of your business continuity strategy, as described in "Weathering the Unexpected". A useful activity can also be sitting your team down and working through how some of these scenarios could theoretically play out, tabletop-game style. This can also be a fun opportunity to explore those terrifying "what ifs", for example: "What if part of your network connectivity gets shut down unexpectedly?"

9. Automate your mitigations

In March of 2023, a near-simultaneous failure of multiple networking devices in a few datacenters resulted in widespread packet loss. In this 6-day outage, an estimated 70% of services experienced varied levels of impact, depending on the location, service load, and configuration at the time of the network failure. In such instances, you can reduce your mean time to resolution (MTTR) by automating mitigating measures otherwise done by hand. If there's a clear signal that a particular failure is occurring, then why can't that mitigation be kicked off in an automated way? Sometimes it is better to apply an automated mitigation first and save the root-causing for after user impact has been avoided.

10. Reduce the time between rollouts, to decrease the likelihood of the rollout going wrong

In March of 2022, a widespread outage in the payments system prevented customers from completing transactions, resulting in the Pokémon GO community day being postponed. The cause was the removal of a single database field, which should have been safe as all uses of that field had been removed from the code beforehand. Unfortunately, a slow rollout cadence in one part of the system meant that the field was still being used by the live system.
Having long delays between rollouts, especially in complex, multi-component systems, makes it extremely difficult to reason about the safety of a particular change. Frequent rollouts, with the proper testing in place, lead to fewer surprises from this class of failure.

11. A single global hardware version is a single point of failure

Having only one particular model of device to perform a critical function can make for simpler operations and maintenance. However, it means that if that model turns out to have a problem, that critical function is no longer being performed. This happened in March 2020, when a networking device with an undiscovered zero-day bug encountered a change in traffic patterns that triggered that bug. Because the same model and version of the device was in use across the network, a substantial regional outage ensued. What prevented this from being a total outage was the presence of multiple network backbones that allowed high-priority traffic to be routed via a still-working alternative.

Latent bugs in critical infrastructure can lurk undetected until a seemingly innocuous event triggers them. Maintaining a diverse infrastructure, while incurring costs of its own, can mean the difference between a troublesome outage and a total one.

So there you have it: eleven lessons learned from two decades of Site Reliability Engineering at Google. Why eleven? Well, you see, Google Site Reliability, with our rich history, is still in our prime.
2026-01-13T09:29:20
https://w3techs.com/technologies/cross/dns_server/ranking
Usage Survey of DNS Server Providers broken down by Ranking (W3Techs, provided by Q-Success)

Usage of DNS server providers broken down by ranking

This table shows the percentages of websites using various DNS server providers broken down by ranking. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See technologies overview for explanations on the methodologies used in the surveys. How to read the table: Cloudflare is used by 15.4% of all the websites, and by 32.7% of all the websites that rank in the top 1,000,000.

Provider                Overall   top 1,000,000   top 100,000   top 10,000   top 1,000
Cloudflare              15.4%     32.7%           41.2%         41.5%        42.0%
GoDaddy Group           10.1%      6.6%            3.5%          1.3%         0.1%
Newfold Digital Group    4.0%      1.9%            0.9%          0.4%         0.1%

W3Techs.com, 13 January 2026

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.

Technology Brief: a DNS (domain name system) server manages internet domain names and their associated records such as IP addresses.
We group brands that are owned by the same entity to show the real relevant market shares.

Latest related posting: Web Technologies of the Year 2025 (5 January 2026): we compiled the list of web technologies that saw the largest increase in usage in 2025.
2026-01-13T09:29:20
https://w3techs.com/technologies/overview/social_widget
Usage Statistics and Market Share of Social Widgets for Websites, January 2026 (W3Techs, provided by Q-Success)

Technologies > Social Widgets

This diagram shows the percentages of websites using various social widgets. See the technologies overview for explanations of the methodologies used in the surveys. Reports are updated daily.

How to read the diagram: 76.7% of the websites use none of the social widgets that we monitor. WhatsApp is used by 13.0% of all websites, which corresponds to a social widget market share of 55.8%.

  Widget           Usage    Market share
  None             76.7%        -
  WhatsApp         13.0%      55.8%
  Facebook         11.2%      48.0%
  Twitter/X         9.9%      42.7%
  Pinterest         4.7%      20.0%
  LinkedIn          4.4%      18.9%
  AddToAny          0.9%       4.1%
  Reddit            0.8%       3.6%
  Tumblr            0.6%       2.7%
  VKontakte         0.6%       2.4%
  Telegram          0.4%       1.6%
  ShareThis         0.2%       0.8%
  Line              0.2%       0.6%
  Bluesky           0.1%       0.4%
  StumbleUpon       0.1%       0.4%
  Xing              0.1%       0.2%
  Shareaholic       0.1%       0.2%
  Yandex           <0.1%       0.2%
  UpToLike         <0.1%       0.2%
  Odnoklassniki    <0.1%       0.2%
  Buffer           <0.1%       0.2%
  Threads          <0.1%       0.1%
  Weibo            <0.1%       0.1%
  Pluso            <0.1%       0.1%
  MySpace          <0.1%       0.1%
  LiveJournal      <0.1%       0.1%
  Mastodon         <0.1%       0.1%

W3Techs.com, 13 January 2026. Note: a website may use more than one social widget.

The following social widgets have a market share of less than 0.1%: Hacker News, Baidu Share, Mixi, Signal, WeShare, Mail.Ru, Diigo, Slashdot, DZone, Balatarin, Fark.

Technology Brief: Social Widgets. Social widgets are small programs integrated into a website that allow visitors to interact with some form of social service in order to share information about the website with a group. Only widgets that support sharing are included; links to site-specific pages and follow-me widgets are not.
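The "market share" column above can be recomputed from the usage column: market share is absolute usage relative to the share of sites that use at least one monitored widget, i.e. usage divided by (100% minus the 76.7% using none). A quick sanity check (my own illustration; the published figures are rounded, so recomputed values can differ by a few tenths of a percent):

```python
NONE = 76.7  # percent of sites using none of the monitored social widgets

def market_share(usage_pct, none_pct=NONE):
    """Convert absolute usage into market share among widget-using sites."""
    return usage_pct / (100.0 - none_pct) * 100.0

# WhatsApp: 13.0% usage -> roughly 55.8% market share, matching the table.
whatsapp_share = market_share(13.0)
```

The same relation holds across the table (e.g. Facebook: 11.2 / 23.3 ≈ 48%), up to rounding of the underlying unrounded survey figures.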
2026-01-13T09:29:20
https://w3techs.com/technologies/details/dn-wix
Usage Statistics and Market Share of Wix as DNS Server Provider, January 2026 (W3Techs, provided by Q-Success)

Technologies > DNS Servers > Wix

These diagrams show the usage statistics of Wix as DNS server provider. See the technologies overview for explanations of the methodologies used in the surveys. Reports are updated daily.

Wix is used as DNS server provider by 3.7% of all websites.

Historical trend: the trend diagram shows the historical percentage of websites using Wix; growth rates compared to all other DNS server providers are in the Wix market report.

Market position: the market-position diagram compares Wix with the most popular DNS server providers in terms of popularity and traffic.

Popular sites using Wix: Admaster.cc, Fcmobileforum.com, Blacktoonurl2.com, Winbox88my4.com, Ascuolaoggi.com, Blacktoon365.com, Eattolivenottodie.com, Cuidadoconelperro.com.mx

Random selection of sites using Wix: The-campus.info, Atelier-feuillades.fr, Ortholuxehome.ca, Badgercanyontea.com, Beauty-house.no

Sites using Wix only recently: Hpcz.org.zm, Hospex.in, 주소콘.com, Nufcblog.com, Releasesky.xyz

Technology comparisons: visitors often compare the usage statistics of Wix with Jimdo, JogjaCamp and BanaHosting.

Technology Brief: Wix. Category: DNS Server Providers. Wix is an online platform for creating websites. Website: wix.com
2026-01-13T09:29:20
https://w3techs.com/technologies/overview/site_element
Usage Statistics of Site Elements for Websites, January 2026 (W3Techs, provided by Q-Success)

Technologies > Site Elements

This diagram shows the percentages of websites using various site elements. See the technologies overview for explanations of the methodologies used in the surveys. Reports are updated daily.

How to read the diagram: 2.4% of the websites use none of the site elements that we monitor. CSS is used by 92.9% of all websites.

  None                              2.4%
  CSS                              92.9%
  Compression                      88.4%
  Default protocol https           86.7%
  Cookies                          40.4%
  Default subdomain www            37.7%
  HTTP/3                           36.6%
  HTTP/2                           33.8%
  HTTP Strict Transport Security   31.3%
  IPv6                             28.5%
  ETag                             26.5%
  QUIC                              8.2%
  Frameset                          0.2%

W3Techs.com, 13 January 2026. Note: a website may use more than one site element.

The following site elements have a market share of less than 0.1%: SPDY.

Technology Brief: Site Elements. Site elements are optional technical properties or features of websites.

Latest related posting: Web technology fact of the day (9 May 2025) — 38.9% of all websites redirect to their www subdomain, roughly the same as 5 years ago; 63.1% of the top 1,000 sites do so.
2026-01-13T09:29:20
https://w3techs.com/technologies/overview/server_location
Distribution of Server Locations of Websites, January 2026 (W3Techs, provided by Q-Success)

Technologies > Server Locations

This diagram shows the percentages of websites using various server locations. See the technologies overview for explanations of the methodologies used in the surveys. Reports are updated daily.

How to read the diagram: United States is used by 33.5% of all websites whose server location we know.

United States 33.5%, Germany 13.8%, Japan 6.2%, France 5.3%, Netherlands 3.8%, Russian Federation 3.5%, United Kingdom 2.9%, Italy 2.7%, Brazil 2.1%, India 2.1%, Poland 2.0%, Singapore 1.7%, Spain 1.5%, Turkey 1.3%, Canada 1.2%, Czech Republic 1.1%, China 1.1%, Iran 1.1%, Australia 1.0%, South Korea 1.0%, Belgium 1.0%, Viet Nam 0.9%, Switzerland 0.9%, Ireland 0.9%, Denmark 0.8%, Finland 0.8%, Indonesia 0.7%, Ukraine 0.7%, Romania 0.6%, Hungary 0.5%, Sweden 0.5%, South Africa 0.5%, Slovakia 0.4%, Lithuania 0.4%, Austria 0.4%, Bulgaria 0.3%, Taiwan 0.3%, Thailand 0.3%, Argentina 0.3%, Portugal 0.3%, Belarus 0.2%, Malaysia 0.2%, Israel 0.2%, Chile 0.2%, Kazakhstan 0.2%, Estonia 0.2%, Slovenia 0.2%, Norway 0.2%, Croatia 0.1%, Greece 0.1%, Serbia 0.1%, New Zealand 0.1%, Bangladesh 0.1%, Latvia 0.1%, Mexico 0.1%

W3Techs.com, 13 January 2026. Note: a website may use more than one server location.

The following server locations are used by less than 0.1% of the websites: Moldova, United Arab Emirates, Uzbekistan, Iceland, Luxembourg, Georgia, Colombia, Bosnia and Herzegovina, Saudi Arabia, Nepal, Peru, Uruguay, Tunisia, Pakistan, Mongolia, Kyrgyzstan, Kenya, Ecuador, Philippines, Bahrain, Algeria, Egypt, Azerbaijan, Costa Rica, Tanzania, North Macedonia, Armenia, Paraguay, Cyprus, Seychelles, Bolivia, Nigeria, Morocco, Venezuela, Jordan, Sri Lanka, Turkmenistan, Ethiopia, Albania, Qatar, Oman, Rwanda, San Marino, Tajikistan, Panama, Palestine, Syria, Cambodia, Ghana, Libya, Kuwait, Zambia, Guatemala, Cuba, Myanmar, Angola, Lebanon, Belize, Bhutan, Uganda, Namibia, Mozambique, Senegal, Cote d'Ivoire, Mauritius, Zimbabwe, Andorra, Honduras, Dominican Republic, Malta, El Salvador, Monaco, Montenegro, Nicaragua, Cameroon, Iraq, Brunei Darussalam, Burkina Faso, Malawi, Liechtenstein, Laos, Botswana, Yemen, Madagascar, Jamaica, Afghanistan, Kosovo, Maldives, Timor-Leste, Bahamas, Mali, Benin, Holy See (Vatican City State), Mauritania, Cape Verde, Suriname, Togo, Papua New Guinea, Trinidad and Tobago, Curaçao, Fiji, Lesotho, Saint Vincent and the Grenadines, Barbados, Eswatini, Gabon, Somalia, Samoa, North Korea, Democratic Republic of the Congo, Djibouti, Sudan, Burundi, Niger, Gambia, Tonga, Saint Lucia, Vanuatu, Guyana, Solomon Islands, Guinea, Haiti, Republic of the Congo, Dominica, Grenada, Sao Tome and Principe, Antigua and Barbuda, Equatorial Guinea, Liberia, Sierra Leone, Chad, Palau, Sint Maarten, Wallis and Futuna Islands, Antarctica, Federated States of Micronesia, Guinea-Bissau, Saint Pierre and Miquelon, Comoros, Marshall Islands, Nauru, Saint Kitts And Nevis, South Sudan, Central African Republic, Eritrea, Kiribati, Mayotte

Technology Brief: Server Locations. The server location of a website is the country where the server hosting the site is located.

Latest related posting: Web technology fact of the day (26 November 2024) — 13.9% of websites are hosted in Germany, and 38.9% of the top 1,000 sites.
2026-01-13T09:29:20
https://w3techs.com/technologies/details/dn-xserverjp
Usage Statistics and Market Share of XServer as DNS Server Provider, January 2026 (W3Techs, provided by Q-Success)

Technologies > DNS Servers > XServer

These diagrams show the usage statistics of XServer as DNS server provider. See the technologies overview for explanations of the methodologies used in the surveys. Reports are updated daily.

XServer is used as DNS server provider by 1.3% of all websites.

Historical trend: the trend diagram shows the historical percentage of websites using XServer; growth rates compared to all other DNS server providers are in the XServer market report.

Market position: the market-position diagram compares XServer with the most popular DNS server providers in terms of popularity and traffic.

Popular sites using XServer: Omikuji-do.com, Lit.link, Xserver.ne.jp, Kobe-np.co.jp (used on subdomain), Mamahiroba.com, You1news.com, Zekamashi.net, Buzzap.jp, Soundeffect-lab.info, Ai-novel.com

Random selection of sites using XServer: Fukusij-recruit.com, Ordermadekitchen.com, Shiawase-sozoku.online, Lli-publishing.com, Tei-clinic.com

Sites using XServer only recently: Soco-st.com, モンハンワールド攻略.com, Seagaia.co.jp, Moeyo.com, Tombow.com

Technology Brief: XServer. Category: DNS Server Providers. XServer is a Japanese web hosting provider. Website: xserver.ne.jp
2026-01-13T09:29:20
https://goo.gle/mobaa-vector
Google SRE - Methods for Vector Display of Internet Artifacts (Site Reliability Engineering)

Methods for Vector Display of Internet Artifacts
You Know The Rules And So Do I
By Štěpán Davidovič with Salim Virji

One way to learn about distributed systems in depth is to creatively misuse them and find the limits of their flexibility. So one year, when April Fools' Day came along, I took the opportunity to do just that; Google as a whole has quite a history of April Fools' jokes.

I am a frequent user of Monarch, Google's planet-scale monitoring system, and Panopticon (or PCon), its user interface. PCon displays monitoring graphs that contain up to 127 lines of information on how your system is doing. Monarch collects and stores metric data, and allows arbitrary querying using a rich query language: the Python-based, domain-specific language called Mash.
Google Cloud users may also recognize the Monarch Query Language. A Mash query looks like so:

    Fetch(Raw('example.production.Servers', '/my/service/errors'),
          {'user': 'stepand', 'server': 'rickroll_server'})
    | Window(Align('1h'))
    | GroupBy([], Sum())

Now I wanted to take this Monarch system and creatively misuse it, that is, use it in an unusual way. What better way to do this than (wait for it) to Rickroll Monarch? For those that don't know, "Rickrolling is a bait-and-switch prank that involves posting a hyperlink that is supposedly relevant to the topic at hand . . . but redirects the viewer to the music video of Never Gonna Give You Up, a 1987 dance-pop single by English singer-songwriter Rick Astley." So let's put Monarch and Rickroll together! How can we Rickroll Monarch?

Aside from the April Fools' joke, serious learning occurred too. This project taught me about problem decomposition and about being principled when cutting corners in exactly the right places. It helped me move from instinctive to deliberate reasoning when explaining why I felt this or that corner should be cut, and it gave me more familiarity with our monitoring system. The prank was also a good opportunity to practice a thing or two about graphics and animation, subjects I find interesting in their own right.

Let's get started

Okay, so how do you get Rick Astley to show up dancing on your monitoring dashboard? From the start, I wanted to use lines on Panopticon, that is, treat it as a vector display. It's a natural choice, given that in Panopticon, everything is a line. If we're doing vectors, what problems do we have to solve?

1. Get the actual Rick Astley video. Turns out, this is not a big problem. There are a bunch of videos on the internet. Phew, I was worried for a moment!
2. Turn the video into vectors. I'm okay with outlines, so we need one silhouette per frame of animation.
3. Take the silhouette and write it into Monarch correctly. There are a bunch of limitations to that.
4. Display the animation at a sufficient refresh rate so that it actually looks like animation to people.

(Source: video capture from https://www.youtube.com/watch?v=dQw4w9WgXcQ)

Bonus points: don't melt Monarch in the process! Everything done here is an abuse of the system, of course, so I want to be careful. I need to learn a lot about the system first.

From Rick Astley Video to Rick Silhouette

If we look at the Rick Astley video, we can see there's a lot going on: there's a complex background, a microphone, and of course, Mr. Astley is moving around quite a bit. My first thought was rotoscoping it. I've done that a few times for some amateur movie shots. It isn't too hard, but it takes a lot of effort; nope, not gonna happen. My second thought was, has someone done this already? As a matter of fact, yes, someone did: Google! A few years back, Google Rickrolled the internet with a Rickroll in Webdriver Torso. OK, we've got a high-contrast silhouette now. Still no vectors, but much easier to turn into vectors.

First things first. Let's download the video from https://www.youtube.com/watch?v=klqi_h9FElc and turn it into a series of frames:

    $ mkdir frames
    $ ffmpeg -i klqi_h9FElc.mkv -r 4 frames/output_%04d.png

Now that I have each individual frame, I only care about the red part of every frame (thanks, whoever made this Webdriver Torso video!). A shell script using ImageMagick makes quick work of that and leaves only the parts I care about. I found this ImageMagick invocation on the internet and just adjusted the colors:

    $ cd frames
    $ mkdir processed
    $ for i in *.png; do convert "$i" -level 25%,75% -fill white -fuzz 10% +opaque "#f90000" "processed/$i.pnm"; done

Notice the ".pnm" suffix. This suffix indicates that I want the files to be processed by the next tool in the pipeline: potrace, which only accepts a few input formats. The resulting images may not be pretty, but they'll do.
There are various minor blotches of compression artifacts, which I'm sure I could remove somehow, but these are corners I can easily cut later, so I didn't bother removing them now.

In my experience, potrace is probably the fastest, easiest way to convert an outline into vectors, for simple shapes. I frequently use potrace in graphics work, to get SVG into Inkscape. However, SVG is kind of awkward to work with, and I have many more steps left. Remember, I need to get this data into Monarch, which still requires a lot more processing. Also, potrace generates Bezier curve definitions, which I'd need to turn into individual data points. Handling SVG and Bezier curves is a lot of work. Can I do something easier?

This is when I notice that potrace supports "GeoJSON", a format I've never heard of before, but it conveniently dumps data into JSON. In addition, the GeoJSON output approximates each Bezier curve by eight straight line segments. Score! Double victory!

Next, let's use CSV as a simple data interchange format. My data format is a CSV with three columns: frame number, point x coordinate, and point y coordinate. A single frame has a single silhouette, and the points are the outline of that silhouette. I run a simple Python script and get this generated:

    import csv
    import json
    import subprocess
    import sys

    with open('result.csv', 'w') as output_csv:
        writer = csv.writer(output_csv, lineterminator='\n')
        for index, fn in enumerate(sys.argv[1:]):
            print('[%d] Tracing PNM %s' % (index, fn))
            with open(fn, 'rb') as input_pnm:
                p = subprocess.Popen(['/usr/bin/potrace', '-b', 'geojson'],
                                     stdin=subprocess.PIPE,
                                     stdout=subprocess.PIPE,
                                     close_fds=True)
                stuff_from_pipe = p.communicate(input_pnm.read())[0]
                p.wait()
            print('[%d] Writing frame' % index)
            data = json.loads(stuff_from_pipe)
            for item in data['features'][0]['geometry']['coordinates'][0]:
                writer.writerow([index] + item)

Woohoo, I got my CSV file with the animation!
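To make that three-column format concrete, here is a minimal reader for such a CSV (my own sketch, not part of the original pipeline), grouping the rows back into one outline per frame:

```python
import csv
from collections import defaultdict

def load_frames(path):
    """Group (frame, x, y) CSV rows into a list of (x, y) points per frame."""
    frames = defaultdict(list)
    with open(path, newline='') as f:
        for frame, x, y in csv.reader(f):
            frames[int(frame)].append((float(x), float(y)))
    return frames
```

Each value of the returned dict is the outline of one silhouette, in the order potrace emitted the points.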
From Rick Silhouette to Monarch Astley

I have a vector silhouette now, and my goal is to get it into the monitoring system; clearly a very reasonable goal. When writing to Monarch, you need to go from left to right, from the oldest data point to the youngest. Your next write must be for a later timestamp than the previous write, or Monarch rejects it. We need to follow this directionality requirement, instead of simply adding points clockwise around the silhouette and calling it a graph.

This requires some code. The code finds the leftmost point in the silhouette, then traverses clockwise and counterclockwise to find the longest segment in which no point is further left than the point preceding it. Once such a segment is found, it is split off as an independent line, and we repeat the process until we run out of lines. We then write those lines to Monarch as independent streams, with points ordered from left to right. For testing, I add the option to dump this output into SVG, with a new color for each line. This helps to find various bugs in the algorithm, and nicely visualizes what's going on.

Next, I use a library designed for bulk writes into Monarch. In order to keep the data separate from everything else, I use custom root labels, a custom schema, and a custom metric. The metric is /experimental/stepand/rickroll/never_gonna_give_you_up; the schema is experimental.users.stepand.and.RickAstley. Each picture now has a width, which is equivalent to a time duration, and a height, which is the value of the time series.

Originally, I'd wanted to bulk-write one frame per second into Monarch, slightly into the future according to the PCon viewport. Each second would have held the one frame I'm interested in, and I would have narrowed the PCon viewport to a one-second width. Unfortunately, this was not possible: the Streamz API prevented me from writing two data points for the same metric in the same write.
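The segment-splitting idea can be sketched like this (a simplified illustration of my own, not the project's code, which also picks the longest run by traversing in both directions): split a closed outline into runs whose x coordinates never decrease, so each run can be written to Monarch left to right.

```python
def monotone_segments(points):
    """Split a closed outline into runs whose x coordinates never decrease."""
    if not points:
        return []
    # Start from the leftmost point so every run begins as far left as possible.
    start = min(range(len(points)), key=lambda i: points[i][0])
    ordered = points[start:] + points[:start]
    segments = [[ordered[0]]]
    for pt in ordered[1:]:
        if pt[0] >= segments[-1][-1][0]:
            segments[-1].append(pt)   # still moving right (or vertical)
        else:
            segments.append([pt])     # x went left: start a new stream
    return segments
```

Each returned run then maps to one independent Monarch stream, with the x coordinate playing the role of the timestamp.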
I also could not write quickly, because I was hitting issues with reordered writes (recall that Monarch wants them in chronological order), and my naive implementation couldn't write fast enough under this requirement. I still have other options:

1) Work around the reordering problem by writing one data point per stream.
2) Create new targets or new metric fields.
3) Find another solution to the whole animation question.

I start with the first option but quickly realize that the total number of points across all frames is around 20,000. Having one stream per data point is a very inefficient use of Monarch, because it is a mismatch to a typical time series (with one stream running for a long time), and goes counter to the physical storage model. I conclude that I can't render a frame very fast this way.

Therefore, the other solution is to write slowly. Writing the entire image now takes about 20 minutes, and the viewport is one hour wide. Instead of writing just one frame, the writer now writes all frames. Each frame has a different target, so frames can be filtered individually.

On-Screen, Data

When does PCon connect two dots, and when does it leave a gap? Through trial and error, I learned that PCon looks at the last Window() statement in the query. I therefore use this to interpolate points and ensure lines are continuous, even if the original vectorization has a long gap. This is kind of awkward, since it increases the number of points written, but it leads to a substantial increase in picture quality. It is also very useful for the lettering (see the section Writing On The Wall), where it saves me much manual labor by keeping the text legible.

Animate!

Okay, on to the last step: how do we animate? Well, PCon has a feature where it automatically refreshes the graph as often as you'd like. We write individual frames to Monarch, and set the refresh interval to one second. The last thing we need is a way to decide which frame to render.
Essentially, we need a clock with one-second precision. This simple query gives us the number of seconds since the start of the hour:

    Fetch(Raw('experimental.users.stepand.and.RickAstley', '/presence/found'),
          {'give': 0, 'gonna': 0, 'never': 'global', 'up': 0, 'you': 0})
    | Filter(False)
    | Window(Align('1h'), '1h')
    | GroupBy(['never', 'gonna', 'give'], PickAny())
    | JoinWithLiteralTable(target_schema_name='experimental.users.stepand.and.RickAstley',
                           fields=('never', 'gonna', 'give'),
                           streams=[('global', 0, 0, True)],
                           input_default=True)
    | Point(TimestampMicros())
    | Window(Align('1h'))
    | Point(Floor(VAL / 1000000) % 3600)

We use this query and join it with the stored data, and only filter the frame we want. Simply take the stream value modulo the number of frames (which is 21), turn it into a field, and join it against the original data.

It turns out, we can also use this query to solve another problem. When displaying time series, we always see only a limited time window. After a while the image starts to slide to the left, out of the left edge of the window, and we only get a part of Rick Astley. By using time shift, however, we don't even need to write twice: we just move a single set of frames to the right! We use this timing information to decide whether to shift right or not. The entire animation is approximately 30 minutes wide (quite a great width unit)! When the leftmost edge of the data hits the leftmost edge of the viewed window, we just shift it by 30 minutes to the right. After another 30 minutes, we remove the shift, but load data from the new data push. Here's how it looks, all pulled together: check out the result here!

Writing On The Wall

I worry that the Rick Astley silhouette may not be immediately recognizable, so I decide to add some writing to drive the point home. Now that we have a general vector display, the lettering is really the easiest part.
I manually prepare a bunch of letters as vector paths, and the code then scales them and writes them to Monarch, just like the other vectors. After that, only minor improvements to the lettering were needed over time to make it more legible.

Final Thoughts

This was a fun project, and there were many details I had to solve that I don't go into here. The PCon Rickroll was a good exercise in project decomposition: how to go from a bizarre and ambiguous problem statement to concrete subproblems and their concrete solutions, while staying within the limited time budget I'm willing to dedicate to a joke. It helped me be more principled in identifying where to cut corners to maximize impact while minimizing effort: I could have rotoscoped the whole video, built a more robust vectorization, or done better graphing (a colleague recommended using histograms, which can be used to construct grayscale bitmaps). However, recognizing what was "good enough" at each step is what let me actually pull off this April Fool's joke. I could not spend an infinite amount of time, so this strategy made the difference between the project being feasible or not.

On a final note, I'd like to thank the outstanding engineers working on Monarch. The Monarch system is one of the most exciting services I have the pleasure of using.
2026-01-13T09:29:20
https://w3techs.com/technologies/overview/traffic_analysis
Usage Statistics and Market Share of Traffic Analysis Tools for Websites, January 2026 (provided by Q-Success)

This diagram shows the percentages of websites using various traffic analysis tools. See the technologies overview for explanations of the methodologies used in the surveys. The reports are updated daily. How to read the diagram: 43.5% of the websites use none of the traffic analysis tools that we monitor. Google Analytics is used by 44.6% of all websites, that is a traffic analysis tool market share of 79.0%.
Tool  (absolute usage / market share)
None  43.5%  -
Google Analytics  44.6%  79.0%
Meta Pixel  9.3%  16.4%
WordPress Jetpack  4.0%  7.0%
Yandex.Metrica  3.7%  6.5%
Microsoft Clarity  3.5%  6.3%
Hotjar  2.1%  3.7%
MonsterInsights  1.8%  3.1%
Cloudflare Web Analytics  1.6%  2.8%
Matomo  1.5%  2.7%
Microsoft UET  1.3%  2.2%
Snowplow  1.2%  2.2%
New Relic  0.7%  1.3%
TikTok Pixel  0.7%  1.2%
LinkedIn Insight Tag  0.6%  1.1%
HubSpot  0.6%  1.0%
WP Statistics  0.5%  1.0%
LiveInternet  0.5%  0.9%
Top.Mail.Ru  0.5%  0.8%
Full Circle Studies  0.4%  0.8%
Quantcast  0.4%  0.7%
StatCounter  0.3%  0.6%
CrazyEgg  0.3%  0.5%
Baidu Analytics  0.3%  0.5%
Ahrefs Web Analytics  0.3%  0.5%
Plausible  0.2%  0.4%
Pinterest Tag  0.2%  0.4%
Histats  0.2%  0.3%
Mouseflow  0.2%  0.3%
Visual Website Optimizer  0.2%  0.3%
Leadfeeder  0.1%  0.2%
Clicky  0.1%  0.2%
PostHog  0.1%  0.2%
Lucky Orange  0.1%  0.2%
Rambler  0.1%  0.2%
Mixpanel  0.1%  0.2%
Umeng  0.1%  0.2%
Gauges  0.1%  0.2%
Heap  0.1%  0.2%
Amplitude  0.1%  0.1%
Segment  0.1%  0.1%
Koko Analytics  0.1%  0.1%
Adobe Analytics  0.1%  0.1%
Lotame  0.1%  0.1%
Twitter/X tracking  0.1%  0.1%
Piwik Pro  0.1%  0.1%
Contentsquare  0.1%  0.1%
Smartlook  0.1%  0.1%
Optimizely  0.1%  0.1%
ShinyStat  0.1%  0.1%
TOPlist  less than 0.1%  0.1%
Siteimprove  less than 0.1%  0.1%
Ezoic  less than 0.1%  0.1%
Etracker  less than 0.1%  0.1%
Piano  less than 0.1%  0.1%
Inspectlet  less than 0.1%  0.1%
Chartbeat  less than 0.1%  0.1%
FullStory  less than 0.1%  0.1%

Source: W3Techs.com, 13 January 2026. Note: a website may use more than one traffic analysis tool.

The following traffic analysis tools have a market share of less than 0.1%: Dynatrace, Whos.amung.us, HitWebCounter, Simple Analytics, 51.la, Gemius, Pirsch, Parse.ly, Navegg, Pendo, Flag Counter, Nielsen, Woopra, eXTReMe Tracker, Umami, Twipla, FC2 Analyzer, GoatCounter, AWeber, Usermaven, Open Web Analytics, Urchin, W3Counter, Dreamdata, GoSquared, KISSmetrics, HockeyStack, Fathom, ClustrMaps, Monetate, AFS Analytics, Web-Stat, Snoobi, HitsLink, Jentis, Webtrends, Counter.dev, Publytics, Finteza, Opentracker, Medallia DXA, GoStats, INFOnline, Splitbee, WiredMinds, Cronitor Real User Monitoring, Fusedeck, Countly, Hitsteps, Mint, UXWizz, Webtrekk, Marin Software, phpMyVisites, Ackee, Tinylytics, AdEmails, One Dollar Stats, Cross Pixel, Swetrix, CQ Counter, KickFire, TelemetryDeck, 24Counter, Analyzati, Trendcounter, SiteTracker, 123Count, Médiamétrie-eStat, Specific Click, Acoustic Tealeaf, AuriQ, Logaholic, Weborama, Top-Rank.pl, Flags.es, Nilly, MapMyUser.

Technology Brief: web traffic analysis tools collect information about visits to a website and present it to the website owner.

Latest related posting: Web Technologies of the Year 2025 (5 January 2026), a list of the web technologies that saw the largest increase in usage in 2025.

Copyright © 2009-2026 Q-Success
https://www.linkedin.com/products/netscout-arbor-edge-defense/?trk=products_seo_search
Arbor Edge Defense | LinkedIn
Arbor Edge Defense, DDoS Protection Software by NETSCOUT

About: Arbor Edge Defense is an inline security appliance deployed at the network perimeter that can automatically detect and block inbound threats and outbound malicious communication using highly scalable, stateless technology and unique, global threat intelligence.

This product is intended for: Cyber Security Engineer, Chief Executive Officer, Chief Information Officer, Network Operations Center, Head of Security, Security Engineer, Director of Security, Information Technology Specialist, Cyber Security Specialist.

Media:
Smart Perimeter Protection With NETSCOUT Arbor Edge Defense (AED): NETSCOUT Arbor Edge Defense (AED) blocks inbound threats such as DDoS attacks, and outbound communication from compromised internal hosts, acting as a first and last line of smart, automated perimeter defense.
Perimeter Defense Best Practices Using Arbor Edge Defense: NETSCOUT Arbor Edge Defense acts as a first and last line of smart, automated perimeter defense for an organization. It is deployed on-premise, where it acts as a first line of defense by blocking inbound DDoS attacks and protecting stateful security devices.
Demo: Blocking Ransomware Attack with Arbor Edge Defense: acting as a last line of defense, AED detects and blocks outbound indicators of compromise (IoCs) that have been missed by other tools in your security stack, to stop the proliferation of malware before a data breach.
NETSCOUT's Arbor Edge Defense: The First and Last Line of Defense: Adam Bixler, Director, Product Management at NETSCOUT, discusses NETSCOUT AED's unique functionality and how it augments and strengthens traditional endpoint security, covering the value of integrating threat intelligence and DDoS defense as a first and last line of defense. Visit the product page for more information: http://www.netscout.link/6001EPXxb
Get VPN Protection with NETSCOUT Arbor Edge Defense: a DDoS attack poses a major threat to the availability of the VPN gateway. As employees continue to work from home, protecting the availability of your VPN gateway from DDoS attacks is critical. Unlike a cloud-based DDoS protection solution, NETSCOUT Arbor Edge Defense is a stateless, on-premise solution that can instantaneously detect and mitigate DDoS attacks against the VPN gateway, enabling you to maintain home-based employee productivity and business continuity.

Similar products: Cloudflare DDoS Protection, Cloudflare Spectrum, Akamai Prolexic Routed, OVHcloud Anti-DDoS Protection, Kona DDoS Defender, Kaspersky DDoS Protection.
NETSCOUT products: Arbor Cloud DDoS Protection, Arbor Sightline, Arbor Threat Mitigation System (TMS), InfiniStreamNG (ISNG), nGenius Business Analytics, nGeniusONE, nGeniusPULSE, Omnis Threat Horizon.
LinkedIn © 2026
https://research.google.com/pubs/pub41318.html
Photon: Fault-tolerant and Scalable Joining of Continuous Data Streams
Rajagopal Ananthanarayanan, Venkatesh Basker, Sumit Das, Ashish Gupta, Haifeng Jiang, Tianhao Qiu, Alexey Reznichenko, Deomid Ryabkov, Manpreet Singh, Shivakumar Venkataraman
SIGMOD '13: Proceedings of the 2013 international conference on Management of data, ACM, New York, NY, USA, pp. 577-588

Abstract: Web-based enterprises process events generated by millions of users interacting with their websites. Rich statistical data distilled from combining such interactions in near real-time generates enormous business value. In this paper, we describe the architecture of Photon, a geographically distributed system for joining multiple continuously flowing streams of data in real-time with high scalability and low latency, where the streams may be unordered or delayed. The system fully tolerates infrastructure degradation and datacenter-level outages without any manual intervention. Photon guarantees that there will be no duplicates in the joined output (at-most-once semantics) at any point in time, that most joinable events will be present in the output in real-time (near-exact semantics), and exactly-once semantics eventually. Photon is deployed within Google Advertising System to join data streams such as web search queries and user clicks on advertisements. It produces joined logs that are used to derive key business metrics, including billing for advertisers. Our production deployment processes millions of events per minute at peak with an average end-to-end latency of less than 10 seconds. We also present challenges and solutions in maintaining large persistent state across geographically distant locations, and highlight the design principles that emerged from our experience.
Research Areas: Data Management; Distributed Systems and Parallel Computing
https://www.linkedin.com/products/akamai-technologies-akamai-prolexic-routed/?trk=products_seo_search
Akamai Prolexic Routed | LinkedIn
Akamai Prolexic Routed, DDoS Protection Software by Akamai Technologies

About: Prolexic solutions provide fully managed DDoS protection for your applications, data centers, and network infrastructure.

Similar products: Cloudflare DDoS Protection, Cloudflare Spectrum, OVHcloud Anti-DDoS Protection, Kona DDoS Defender, Kaspersky DDoS Protection, Arbor Cloud DDoS Protection.
Akamai Technologies products: Akamai Edge DNS, Akamai Identity Cloud, Akamai IoT Edge Connect, Aura Managed CDN, BOCC, DNSi AuthServe, DNSi Big Data Connector, Enterprise Threat Protector, Media Services Live.
https://www.linkedin.com/products/unifonic-chatbot/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_image-click
Chatbot | LinkedIn
Chatbot, Chatbot Software by Unifonic

About: Unifonic Chatbot is a visual tool for building fully functional chatbots using a drag-and-drop creator. Users can build conversational flows with ease and flexibility in minutes without any coding required. It currently supports WhatsApp, Webchat, Twitter Direct Messenger, and Facebook Messenger.

Similar products: Blip, RD Station Conversas, Customer Engagement, Omnichat, ELX Chatbot, ChatGuru.
Unifonic products: Agent Console, Authenticate, Flow Studio, Multichannel Campaigns, Number Masking, Programmable Channels.
https://w3techs.com/technologies/reportlist/dns_server
DNS Server Providers Market Reports (provided by Q-Success)

Our technology market reports are detailed monthly reports on the usage of DNS server providers. We offer the following types of reports.

Overall Report: covers all statistics of the whole industry (726 DNS server providers) in much detail. The report comes as a 20,550-page PDF file and costs 999 Euro. See: DNS Server Providers Market Report.

Historical Usage Trends Report: shows the monthly usage data of all 726 DNS server providers since November 2019. The report comes as a CSV file, ready for further processing or importing into Excel, and costs 299 Euro. See: DNS Server Providers Historical Usage Trends Report.

Historical Performance Trends Report: shows the monthly performance data in terms of page speed of all 639 DNS server providers for which we have sufficient performance data since January 2023. The report comes as CSV files, ready for further processing or importing into Excel, and costs 299 Euro. See: DNS Server Providers Historical Performance Trends Report.

Reports on specific DNS server providers: these reports focus on specific DNS server providers.
They come as PDF file and cost between 299  and  499 Euro , depending on the provider. Select a DNS server provider to learn more about the content of the specific reports: 1&1 Versatel 1-grid 101domain 1984 Hosting 1blu 1Gb.ru 20i 51dns.com 5G Networks A1 Hrvatska A1 Makedonija A1 Slovenija A1 Telekom Austria Group Abion Above.com Active 24 Adriahost Advanced Hosting Afraid Afrihost Akamai Akamai Group Alastyr Alestra Alfahosting Alibaba Alibaba Group All-inkl.com Amazon Amen Anexia Anexia Group ANS Antagonist Aplus.net Argeweb Arsys Artera Aruba Aruba Group ArvanCloud Asiatech AT&T Atman AttHost Automattic Avalon Axarnet AXC AXSpace AZDIG Azion BanaHosting Beget Beon Intermedia Beyond.pl Bezeq Bezeq Group Bigcommerce BigRock BigScoots BIT Bitcommand Blacknight Bluehost Boreus BrandShelter Brixly BuddyNS Bunny.net Cafe24 Camel Host Cargo catalyst2 cdmon CDNetworks Celeste Celeste Group Cellcom CentralNic CenturyLink Checkdomain CHML Chunghwa Telecom Cinc Claranet Claus Web Cloud86 Cloudflare CloudFloorDNS ClouDNS CloudOne Digital ColoCrossing Colombia Hosting Color Me Shop Colorful Box Combell Comcast Comodo Comvive ConoHa Constellix Contabo Contabo Group Converty Cpanel Creatium CrocWeb Crucial CSC CSL CtrlS Curanet Cyber_Folks Cyber_Folks Group cyon DanDomain Datacom DDoS-Guard DDS Dealer.com DealerOn Delta.bg deSEC Dewaweb DHH DHH DHH Croatia Dhosting DigiCert DNS Trust Manager DigiCert Group Digital Pacific DigitalOcean Dinahosting DNS Made Easy DNSEver DNSimple Dogado Dollarhost Domain The Net Domain.com domaindiscount24 DomaiNesia Domainfactory Domainhotelli Domains.co.za Domeneshop Dominios.pt DonDominio Dongee DonWeb Doteasy DotRoll DreamHost Dreamscape Networks Dynadot Dynu DZSecurity easyDNS easyDNS Group Easyhost Easyname Ebrand EDIS EKM ElCat Empretienda Enartia Group Encirca eNom Entorno Ergonet ETECSA EuroByte EuroDNS Exabytes Exabytes Group Excedo Exepto Exo Hosting ExonHost Exoscale Ezoic F5 FaithConnector FastComet Fasthosts FastVPS FirstVDS 
Flexbe Flexential Flexwebhosting Fluccs Fornex Forpsi Fozzy Free Free Pro FREEhost.com.ua Freemium.hu FutureSpirits Gabia Gandi Gcore General Registry Genesys Informatica Gigahost GigeNET GleSYS Globalhost GlobeHosting GMO GlobalSign GMO Internet GMO Internet Group GMO Pepabo GO54 GoCloudEasy GoDaddy GoDaddy Group Golemos Google Gransy Green GreenGeeks Group.one GTHost Güzel Hosting H88 Web Hosting Hawk Host Heart Internet Heberjahiz Heteml Hetzner Hexonet HiChina HitMe.pl Hitrost Hivelocity HKBN Hoasted home.pl Host Europe Host.it Hostafrica Hostafrica Group HostArmada Hostatom HostDime HostDL Hoster.by Hoster.kz Hosterion HosterPK Hostfactory HostFeat HostForWeb HostGator Hostinet Hosting Ireland Hosting Ukraine Hosting.cl Hosting.com Hosting.com Group Hosting.de Hosting.kr Hostinger HostingHouse Hostingpalvelu HostingRaja Hostiran Hostland Hostnet Hostneverdie HostPapa HostPapa Group Hostpoint HostPress Hostpro Hoststar Hosttech Hostwinds http.net Hurricane Electric Hypernode I'm Web i-host I3C IBM IBM Group ICDSoft ICONZ-Webvisions IDC Frontier IDCloudHost IdeaSoft iHouseweb IHS Iliad Group Imperva iNames IndiaMART iNET Inetmar Infomaniak Inleed InMotion Instra Integrity Internet Initiative Japan Internet Thailand Internet.BS Interneto Vizija InterNetX InterServer INWX IONOS IP.gr IpHost IPServerOne IQ PL Iranserver Irish Domains Iron Hosting Centre ironDNS IServ Isimtescil iTopPlus iwantmyname Janela Digital Jetserver Jimdo Jino JogjaCamp Kagoya Kakao Kebirhost Keliweb Kenlo KeurigOnline Key-Systems KingHost KnownHost Krystal Latinoamérica Hosting LCN Ledl.net LetsHost Level27 LH.pl Liberty Global Group LimooHost Linkeo Linode LinQhost Linux.pl Liquid Web LiquidNet LiveDNS Lnw Loading Locaweb Lolipop Loopia Louhi LPMotor LucusHost LWS Group LWSA Group Lyrical Host M247 Magic Online Magyar Hosting Mail.Ru Majordomo MakeShop Japan MakeShop Korea Managed IP Manitu MarkMonitor Master Internet Masterhost Masterweb Mắt Bão Maxcluster McHost mCloud Mediacenter 
Megagroup Mehost Metanet Mi.com.co Microsoft Mihan mijn.host Mijndomein MilesWeb MiroHost Mittwald Mixhost Mizbanfa MKhost MochaHost MojoHost Monarobase Moniker Mono Solutions MyDevil.net myLoc Name Hero Name.com NameBright Namecheap Names.co.uk Nameshield NameSilo Namespace NameWeb Natro Naver Nazwa.pl Neoserv Net Chinese Netafraz NetAngels NetArt Group NetCologne netcup NetEase Nethouse Netinternet Netmark Netsons Network Solutions Neubox Newfold Digital Newfold Digital Group Nexcess Nexigen Digital Nexylan Nhan Hoa Nicalia Nimbus Hosting Nine Internet Solutions No-IP Nomeo Nominalia Nova Novatrend NS1 Ntirety Nuthost o12.pl o2switch Octenium Oderland okITup Omnis Network Onamae One.com OnlyDomains Openprovider OpenSRS Opti9 Oracle OVH PA Vietnam Pair Networks Panthur Papaki Pars Parva System Patmos Piensa Solutions Planeetta PlanetHoster Play PlusServer PointDNS Porkbun Profihost Prom.ua Proserve PS Internet Company PTisp PublicDomainRegistry QUIC.cloud R01 RackForest Rackhost Rackspace Radcom Raidboxes Raiola Networks Rakko Rapidenet Raya Negar RBC Group RcodeZero Real Geeks REG.RU Register.com Register.it Register365 Registro.br ResellerClub Ride Rochen Romarg Root.lu RU-Center Rumahweb SabaHost Safaricom Sakura Salla Savvii Sazito ScalaHosting Scaleway ScanNet SchlundTech Seeweb Selectel Selly Seohost Serverel ServerFreak Serverplan Servers.com Setcor SevenHost Sfera Sharktech Shift4Shop Shock Hosting Shock Media Shop-Express Shoper Shopify Shoptet Signet Simply Transit Simply.com Simplyhosting SITE123 SiteGround SiteHost SiteLock SiteSell Sitezoogle Skynova Smarthost Smileserv Softtr Spaceship SpaceWeb Sprinthost Squarespace StableHost Strato Sucuri Sunrise SuperHosting.BG Superspace SupportHost SvetHostingu Swizzonic Takeaway Tárhely.eu Team Internet team.blue Telecom Algeria Teléfonos de México Telehouse Telekom Austria Telemach Telenet Tencent Thai National Telecom The Producers THINline Ticimax TierPoint TierraNet Tilda TimeWeb Timme Hosting TMDHosting 
Top.Host Topsec Total Uptime Totohost Tradeindia TransIP Trillion Group Truehost Tucows Tucows Group Turbify TWNIC UK Servers UKDedicated UltaHost Umbler UNAS United Domains United Group United Internet Uniti Universo Online Uniweb Unlimited.rs UpCloud uPress Váš Hosting Vedos Vercara UltraDNS Vercel Veridyen Verizon Verizon Group Verpex Versio VHosting Vianova Vianova Group Viettel Vigbo Vimexx Virgin Media Visualsoft Vivacom VNET VNPT Volusion Voyager VSHosting VTX Telecom Vultr Wannafind we22 Web Hosting Canada Web.com Web.de Web24 Web4U Webempresa Webglobe webgo Webhost1 WebHostingBuzz Weblium Webnames Webnames WebNIC Website World WebSupport Webtasy Webzi Webzilla WIIT Group Wiroos Wix World4You WorldwideDNS WPMU Dev WPX Hosting XBT Holding XinNet Xneelo XServer Xtudio Networks Yahoo Yandex YouCan.shop Your.Online Yourhosting Zenbox Zoho Zomro Zone Media ZoneEdit Zoner a.s. Zoner Oy
https://uk.linkedin.com/company/fuse-uk
Fuse | LinkedIn
Fuse, Marketing Services, London, England, 37,315 followers
A global sports & entertainment agency

About us: Culturally Connected, Seriously Effective. A global agency with local expertise, connecting brands to the things that matter most to their audiences in culture through partnerships in sport, music, film & television, gaming & esports. These connections make our clients more memorable - the key to unlocking effectiveness. Powered by Omnicom Media Group, we bring an extra edge to strategy, execution, and measurement. Our work has won numerous awards, and the industry frequently recognises our people as top performers in their field. We place great importance on trusted relationships, sound moral judgement, and strong governance, all delivered through transparent processes. We have a fantastic team of 120 in our London HQ and 300 more across offices worldwide who are committed to creating an agency that is passionate, driven and lives by our shared values in an environment that allows individuals to thrive in their careers.
If you are looking for an agency to help build your brand through the power of sport and entertainment, or a collaborative and progressive place to build your career, please get in touch: hello@fuseint.com

Website: http://www.fuseint.com/
Industry: Marketing Services
Company size: 51-200 employees
Headquarters: London, England
Type: Public Company
Founded: 2008
Specialties: Sports Marketing, Entertainment Marketing, Data & Insights, Sponsorship, Partnerships, Experiential, PR, Event Management, Hospitality, and Strategy
Primary location: 90-100 Southwark Street, London, England, GB

Updates

Fuse, 1w: As we enter 2026, our CEO Louise Johnson reflects on why 2025 was a turning point for sports marketing, featured in City AM. From AI and personalisation shaping fan experiences, to women’s sport becoming central, and live events reclaiming their power, access the full article via the link in the comment section. #newyear #sports #sponsorship

Fuse, 2w: 2025 has been a year of incredible milestones and shared achievements. We’re grateful to our clients, partners and Fusers for making it all possible. Here’s to new opportunities and continued success in 2026! Wishing you a joyful holiday season and a bright year ahead. 🎉

Fuse, 2w: Everything begins with an idea, and every part of the National Basketball Association (NBA) space at CCXP Brazil was designed to connect the world’s greatest basketball league into the pop culture scene. The project for Comic Con Experience was born from the VEM PRA QUADRA concept, created by Fuse Brazil, offering fans immersive activations, authentic experiences and unforgettable moments.
🏀 👏 Omnicom Media Brazil, Luiz Fiorese

Fuse, 3w: Fuse's Head of Strategy, Tom Wild, has written for City AM on why cultural crossover has become essential to sports marketing and entertainment. Tom argues that the Joshua vs Paul fight isn't an anomaly. It's evidence that sport can no longer be separated from the cultural forces around it. Modern fans don't live in silos; they follow individuals across sport, fashion, music and content creation. Read the full article here: https://lnkd.in/e5PvbtfY #boxing #culturallyconnected

Fuse, 3w: Huge congratulations to our Global CEO, Louise Johnson, who has been named in Campaign UK's List for Trailblazers for launching Fertility Futures Project. Just a month in, and she's already opening up vital conversations around fertility, infertility and reproductive health. Watching Louise drive meaningful change while leading Fuse is inspiring! #leadership #trailblazer

Fuse, 1mo: Yesterday, some of our Fusers took part in Christmas Jumper Day, raising funds for the incredible Fertility Futures Project, a charity launched by our Global CEO, Louise Johnson. Fertility Futures Project believes fertility is not just a personal issue; it's a public one, and its mission is to ensure the next generation is better informed, better supported, and better equipped to make decisions about their reproductive futures. Big thanks to everyone who brought colour and sparkle to the office and those that donated.
🫶 Learn more about Fertility Futures Project here: https://lnkd.in/euDjceM5

Fuse, 1mo: As the 2025 F1 season wrapped up in Abu Dhabi this past weekend, our CEO Louise Johnson's piece on why Formula 1 has evolved into one of the most valuable global sports assets featured in print in City AM today. From Apple's $160m broadcast deal to brands like PepsiCo going all-in, she explores how F1 has become premium entertainment infrastructure with unmatched global reach. 🏁 Read the full piece here: https://lnkd.in/eVShz-9k #F1

Fuse, 1mo: Congratulations to our Fuse Brazil team on successfully managing Amstel's sponsorships across the Copa Libertadores in their first season. The team connected nine countries, oversaw more than 350 matches, coordinated 30,000+ tickets, and delivered a seamless finale. A fantastic achievement from Luiz Fiorese and the team, great to see such strong work from our Latam region.

Luiz Fiorese, 1mo: This past weekend, in Lima, the Copa Libertadores final took place, and with it our first season working on and managing the Amstel Bier sponsorships comes to an end. The challenge was motivating, the "ball was already rolling", and, captained by Vanessa Brandão and led by Valentin Kondrashkin, we worked hard and, with plenty of strategy and organisation, connected 9 different countries and cultures across more than 350 matches, with more than 30,000 tickets managed, many playbooks, plenty of alignment and dedication, and a grand finale that will certainly stay in the memory of everyone involved. I have to stress that this is an incredible project that fills us with pride because, first of all, we are talking about a magnificent brand that for years has been embedded in sporting culture and connected to the biggest sporting product in the Americas.
But also because we have the joy of replicating and applying here in Brazil the work that our Fuse partners do brilliantly, at our headquarters, for so many other brands that are benchmarks within the UCL. Thank you once again, Vanessa, Valentin and the Amstel team, for trusting FUSE with this incredible project. And thank you, FUSE team, for such a special delivery. Onwards, because 2026 has already begun...

Fuse, 1mo: A glimpse into Chat_UP, the live Unofficial Partner Podcast at Fuse HQ, in partnership with Twenty First Group. Here's a look into the evening as we explored AI's impact on sports marketing. 👏

Fuse, 1mo: As Movember comes to an end, we want to celebrate Fusers James Tredinnick, Thomas Murphy, James Kimber and Luke Bliss, who have taken part in this important initiative. By growing moustaches throughout November, they've helped raise awareness and funds for the Movember Foundation, which supports programmes focused on men's mental and physical health. 💙 The need for action is clear: suicide remains the leading cause of death for men under 50, and only 36% of NHS talk therapy referrals are for men, highlighting the importance of reducing stigma and improving access to support.
If you'd like to support, there's still time to donate here: https://lnkd.in/eK4PFpf6 #Movember #MensHealth #MentalHealthAwareness
2026-01-13T09:29:20
http://www.trello.com/guide
Trello Guides: Help Getting Started With Trello | Trello

Explore the features that help your team succeed. Inbox: capture every vital detail from emails, Slack, and more directly into your Trello Inbox. Planner: sync your calendar and allocate focused time slots to boost productivity. Automation: automate tasks and workflows with Trello. Power-Ups: power up your teams by linking their favorite tools with Trello plugins. Templates: give your team a blueprint for success with easy-to-use templates from industry leaders and the Trello community. Integrations: find the apps your team is already using or discover new ways to get work done in Trello.

Meet Trello: Trello makes it easy for your team to get work done. No matter the project, workflow, or type of team, Trello can help keep things organized. It’s simple – sign-up, create a board, and you’re off! Productivity awaits.

Take a page out of these pre-built Trello playbooks designed for all teams. Marketing teams: whether launching a new product, campaign, or creating content, Trello helps marketing teams succeed. Product management: use Trello’s management boards and roadmap features to simplify complex projects and processes. Engineering teams: ship more code, faster, and give your developers the freedom to be more agile with Trello. Design teams: empower your design teams by using Trello to streamline creative requests and promote more fluid cross-team collaboration. Startups: from hitting revenue goals to managing workflows, small businesses thrive with Trello. Remote teams: keep your remote team connected and motivated, no matter where they’re located around the world.

Our product in action. Use case: task management – track progress of tasks in one convenient place with a visual layout that adds ‘ta-da’ to your to-do’s. Use case: resource hub – save hours when you give teams a well-designed hub to find information easily and quickly.
Use case: Project management Keep projects organized, deadlines on track, and teammates aligned with Trello. See all use cases Standard For teams that need to manage more work and scale collaboration. Premium Best for teams up to 100 that need to track multiple projects and visualize work in a variety of ways. Enterprise Everything your enterprise teams and admins need to manage projects. Free plan For individuals or small teams looking to keep work organized. Take a tour of Trello Compare plans & pricing Whether you’re a team of 2 or 2,000, Trello’s flexible pricing model means you only pay for what you need. View Trello pricing Learn & connect Trello guide Our easy to follow workflow guide will take you from project set-up to Trello expert in no time. Remote work guide The complete guide to setting up your team for remote work success. Webinars Enjoy our free Trello webinars and become a productivity professional. Customer stories See how businesses have adopted Trello as a vital part of their workflow. Developers The sky's the limit in what you can deliver to Trello users in your Power-Up! Help resources Need help? Articles and FAQs to get you unstuck.
Helping teams work better, together. Discover Trello use cases, productivity tips, best practices for team collaboration, and expert remote work advice on the Trello blog.

Getting started with Trello
Welcome to Trello! This guide will walk you through everything you need to know about using Trello, from setting up your first project to equipping your team with all of the tools they need to get the job done. Each chapter includes easy to follow steps, tips, and templates that will turn you into a Trello champion in no time.

Be a Trello expert in 9 easy steps:
Chapter 1: Learn Trello board basics
Chapter 2: Create your first project
Chapter 3: Onboard your team to Trello
Chapter 4: Integrate Trello with other apps
Chapter 5: Powerful collaboration features
Chapter 6: Activate different views
Chapter 7: Automate anything in Trello
Chapter 8: Set permissions and admin controls
Chapter 9: Learn Trello’s top tips and tricks

How to embrace remote work: the complete guide to setting up your team for remote work success.

Try Premium free for 14 days and see your work in a whole new way with Trello views. Join over 2,000,000 teams worldwide who are using Trello to get more done. Start with a template: give your team a blueprint for success with Trello templates – copy, customize, and you’ll be collaborating in no time! Template categories include Project Management, Business, Sales, Design, Engineering, and Marketing.
Copyright © 2024 Atlassian
2026-01-13T09:29:20
https://w3techs.com/technologies/overview/image_format
Usage Statistics of Image File Formats for Websites, January 2026

Technologies > Image File Formats

Usage statistics of image file formats for websites. This diagram shows the percentages of websites using various image file formats. See technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily. How to read the diagram: 3.4% of the websites use none of the image file formats that we monitor. PNG is used by 78.1% of all the websites.
None 3.4%
PNG 78.1%
JPEG 73.0%
SVG 64.9%
WebP 18.8%
GIF 15.8%
AVIF 1.2%
ICO 0.1%
BMP 0.1%
W3Techs.com, 13 January 2026

Percentages of websites using various image file formats. Note: a website may use more than one image file format. The following image file formats have a market share of less than 0.1%: TIFF, APNG, JPEG XL.

Technology Brief: Image File Formats. Image file formats are different ways to store images on a computer, for instance on a web server.
2026-01-13T09:29:20
https://sre.google/sre-book/automation-at-google#id-bb2ugF2FVsQ
Google SRE - Google Automation For Reliability
Chapter 7 - The Evolution of Automation at Google
Written by Niall Murphy with John Looney and Michael Kacirek
Edited by Betsy Beyer

Besides black art, there is only automation and mechanization.
Federico García Lorca (1898–1936), Spanish poet and playwright

For SRE, automation is a force multiplier, not a panacea.
Of course, just multiplying force does not naturally change the accuracy of where that force is applied: doing automation thoughtlessly can create as many problems as it solves. Therefore, while we believe that software-based automation is superior to manual operation in most circumstances, better than either option is a higher-level system design requiring neither of them—an autonomous system. Or to put it another way, the value of automation comes from both what it does and its judicious application. We’ll discuss both the value of automation and how our attitude has evolved over time.

The Value of Automation

What exactly is the value of automation?

Consistency

Although scale is an obvious motivation for automation, there are many other reasons to use it. Take the example of university computing systems, where many systems engineering folks started their careers. Systems administrators of that background were generally charged with running a collection of machines or some software, and were accustomed to manually performing various actions in the discharge of that duty. One common example is creating user accounts; others include purely operational duties like making sure backups happen, managing server failover, and small data manipulations like changing the upstream DNS servers’ resolv.conf, DNS server zone data, and similar activities. Ultimately, however, this prevalence of manual tasks is unsatisfactory for both the organizations and indeed the people maintaining systems in this way. For a start, any action performed by a human or humans hundreds of times won’t be performed the same way each time: even with the best will in the world, very few of us will ever be as consistent as a machine. This inevitable lack of consistency leads to mistakes, oversights, issues with data quality, and, yes, reliability problems. In this domain (the execution of well-scoped, known procedures), the value of consistency is in many ways the primary value of automation.
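To make the consistency argument concrete, here is a minimal, hypothetical sketch (not from the book, and not any real Google tool) of automating one of the tasks mentioned above: setting the upstream resolvers in a resolv.conf-style file. The file path and resolver addresses are illustrative assumptions; the point is that the function is deterministic and idempotent, so running it a hundred times produces exactly the same result every time, which a human editing the file by hand cannot guarantee.

```python
# Hypothetical sketch: idempotent update of upstream resolvers.
# Running it any number of times yields the same file contents,
# illustrating the "consistency" value of automation.
from pathlib import Path

UPSTREAMS = ["10.0.0.2", "10.0.0.3"]  # assumed resolver addresses


def render_resolv_conf(upstreams):
    # Deterministic output: same input, same bytes, every run.
    return "".join(f"nameserver {ip}\n" for ip in upstreams)


def ensure_resolv_conf(path, upstreams):
    """Write the desired config only if it differs; return True if changed."""
    desired = render_resolv_conf(upstreams)
    p = Path(path)
    current = p.read_text() if p.exists() else None
    if current == desired:
        return False  # already converged; nothing to do
    p.write_text(desired)
    return True


if __name__ == "__main__":
    # Demo against a scratch path; a real run would target /etc/resolv.conf.
    ensure_resolv_conf("/tmp/resolv.conf.demo", UPSTREAMS)
    changed_again = ensure_resolv_conf("/tmp/resolv.conf.demo", UPSTREAMS)
    print(changed_again)  # second run is a no-op
```

The "check, then converge" shape also makes the automation safe to run from cron or a config-management system, since repeated runs cause no churn.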
A Platform

Automation doesn’t just provide consistency. Designed and done properly, automatic systems also provide a platform that can be extended, applied to more systems, or perhaps even spun out for profit. (The alternative, no automation, is neither cost effective nor extensible: it is instead a tax levied on the operation of a system.) A platform also centralizes mistakes. In other words, a bug fixed in the code will be fixed there once and forever, unlike a sufficiently large set of humans performing the same procedure, as discussed previously. A platform can be extended to perform additional tasks more easily than humans can be instructed to perform them (or sometimes even realize that they have to be done). Depending on the nature of the task, it can run either continuously or much more frequently than humans could appropriately accomplish the task, or at times that are inconvenient for humans. Furthermore, a platform can export metrics about its performance, or otherwise allow you to discover details about your process you didn’t know previously, because these details are more easily measurable within the context of a platform.

Faster Repairs

There’s an additional benefit for systems where automation is used to resolve common faults in a system (a frequent situation for SRE-created automation). If automation runs regularly and successfully enough, the result is a reduced mean time to repair (MTTR) for those common faults. You can then spend your time on other tasks instead, thereby achieving increased developer velocity because you don’t have to spend time either preventing a problem or (more commonly) cleaning up after it. As is well understood in the industry, the later in the product lifecycle a problem is discovered, the more expensive it is to fix; see Testing for Reliability.
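The MTTR argument can be sketched in a few lines. The following is a hypothetical auto-remediation loop, not a real Google system: it probes for a common, well-scoped fault and applies a known repair, shrinking time-to-repair from "until a human is paged and responds" to one polling interval, and escalating to a human only when the fault is outside its scope. The `probe` and `repair` callables are assumptions standing in for real health checks and repair actions.

```python
# Hypothetical sketch of SRE-style auto-remediation for a common fault.

def remediate(probe, repair, max_attempts=3):
    """Apply `repair` until `probe` reports healthy, up to max_attempts.

    Returns the number of repair attempts used (0 means the system was
    already healthy). Raises if the fault is not one this automation
    knows how to fix, at which point a human should be paged.
    """
    for attempt in range(max_attempts):
        if probe():
            return attempt  # healthy; repairs so far were sufficient
        repair()
    if probe():
        return max_attempts
    raise RuntimeError("auto-remediation failed; escalating to on-call")


# Toy demonstration: a "service" that is down until restarted once.
state = {"up": False}
attempts = remediate(lambda: state["up"],
                     lambda: state.update(up=True))
print(attempts)  # the first repair attempt fixed it
```

Note the bounded retry count: as the text cautions, automatic procedures can make a bad situation worse, so the automation is scoped to a well-defined domain and hands off to a human when it exceeds that scope.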
Generally, problems that occur in actual production are most expensive to fix, both in terms of time and money, which means that an automated system looking for problems as soon as they arise has a good chance of lowering the total cost of the system, given that the system is sufficiently large.

Faster Action

In the infrastructural situations where SRE automation tends to be deployed, humans don’t usually react as fast as machines. In most common cases, where, for example, failover or traffic switching can be well defined for a particular application, it makes no sense to effectively require a human to intermittently press a button called “Allow system to continue to run.” (Yes, it is true that sometimes automatic procedures can end up making a bad situation worse, but that is why such procedures should be scoped over well-defined domains.) Google has a large amount of automation; in many cases, the services we support could not long survive without this automation because they crossed the threshold of manageable manual operation long ago.

Time Saving

Finally, time saving is an oft-quoted rationale for automation. Although people cite this rationale for automation more than the others, in many ways the benefit is often less immediately calculable. Engineers often waver over whether a particular piece of automation or code is worth writing, in terms of effort saved in not requiring a task to be performed manually versus the effort required to write it. It’s easy to overlook the fact that once you have encapsulated some task in automation, anyone can execute the task. Therefore, the time savings apply across anyone who would plausibly use the automation. Decoupling operator from operation is very powerful. Joseph Bironas, an SRE who led Google’s datacenter turnup efforts for a time, forcefully argued: "If we are engineering processes and solutions that are not automatable, we continue having to staff humans to maintain the system.
If we have to staff humans to do the work, we are feeding the machines with the blood, sweat, and tears of human beings. Think The Matrix with less special effects and more pissed off System Administrators."

The Value for Google SRE

All of these benefits and trade-offs apply to us just as much as anyone else, and Google does have a strong bias toward automation. Part of our preference for automation springs from our particular business challenges: the products and services we look after are planet-spanning in scale, and we don’t typically have time to engage in the same kind of machine or service hand-holding common in other organizations. For truly large services, the factors of consistency, quickness, and reliability dominate most conversations about the trade-offs of performing automation. Another argument in favor of automation, particularly in the case of Google, is our complicated yet surprisingly uniform production environment, described in The Production Environment at Google, from the Viewpoint of an SRE. While other organizations might have an important piece of equipment without a readily accessible API, software for which no source code is available, or another impediment to complete control over production operations, Google generally avoids such scenarios. We have built APIs for systems when no API was available from the vendor. Even though purchasing software for a particular task would have been much cheaper in the short term, we chose to write our own solutions, because doing so produced APIs with the potential for much greater long-term benefits. We spent a lot of time overcoming obstacles to automatic system management, and then resolutely developed that automatic system management itself. Given how Google manages its source code [Pot16], the availability of that code for more or less any system that SRE touches also means that our mission to “own the product in production” is much easier because we control the entirety of the stack.
Of course, although Google is ideologically bent upon using machines to manage machines where possible, reality requires some modification of our approach. It isn’t appropriate to automate every component of every system, and not everyone has the ability or inclination to develop automation at a particular time. Some essential systems started out as quick prototypes, not designed to last or to interface with automation. The previous paragraphs state a maximalist view of our position, but one that we have been broadly successful at putting into action within the Google context. In general, we have chosen to create platforms where we could, or to position ourselves so that we could create platforms over time. We view this platform-based approach as necessary for manageability and scalability.

The Use Cases for Automation

In the industry, automation is the term generally used for writing code to solve a wide variety of problems, although the motivations for writing this code, and the solutions themselves, are often quite different. More broadly, in this view, automation is “meta-software”—software to act on software. As we implied earlier, there are a number of use cases for automation. Here is a non-exhaustive list of examples:

- User account creation
- Cluster turnup and turndown for services
- Software or hardware installation preparation and decommissioning
- Rollouts of new software versions
- Runtime configuration changes
- A special case of runtime config changes: changes to your dependencies

This list could continue essentially ad infinitum.

Google SRE’s Use Cases for Automation

In Google, we have all of the use cases just listed, and more. However, within Google SRE, our primary affinity has typically been for running infrastructure, as opposed to managing the quality of the data that passes over that infrastructure.
This line isn’t totally clear—for example, we care deeply if half of a dataset vanishes after a push, and therefore we alert on coarse-grain differences like this, but it’s rare for us to write the equivalent of changing the properties of some arbitrary subset of accounts on a system. Therefore, the context for our automation is often automation to manage the lifecycle of systems, not their data: for example, deployments of a service in a new cluster. To this extent, SRE’s automation efforts are not far off what many other people and organizations do, except that we use different tools to manage it and have a different focus (as we’ll discuss). Widely available tools like Puppet, Chef, cfengine, and even Perl, which all provide ways to automate particular tasks, differ mostly in terms of the level of abstraction of the components provided to help the act of automating. A full language like Perl provides POSIX-level affordances, which in theory provide an essentially unlimited scope of automation across the APIs accessible to the system, 30 whereas Chef and Puppet provide out-of-the-box abstractions with which services or other higher-level entities can be manipulated. The trade-off here is classic: higher-level abstractions are easier to manage and reason about, but when you encounter a “leaky abstraction,” you fail systemically, repeatedly, and potentially inconsistently. For example, we often assume that pushing a new binary to a cluster is atomic; the cluster will either end up with the old version, or the new version. However, real-world behavior is more complicated: that cluster’s network can fail halfway through; machines can fail; communication to the cluster management layer can fail, leaving the system in an inconsistent state; depending on the situation, new binaries could be staged but not pushed, or pushed but not restarted, or restarted but not verifiable. 
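The partial-push outcomes just listed can be sketched as a tiny state model. This is an illustration, not any real rollout tool: the enum values and the classification function are hypothetical, but they show why "did the push succeed?" has more than two answers once a cluster's network or machines fail mid-push.

```python
from enum import Enum

class PushState(Enum):
    """Stages a single machine can be stranded at during a binary push."""
    OLD = 1         # still running the old binary
    STAGED = 2      # new binary staged but not pushed
    PUSHED = 3      # pushed but not restarted
    RESTARTED = 4   # restarted but not yet verified
    VERIFIED = 5    # new binary verified and serving

def cluster_rollout_state(machine_states):
    """Classify a rollout across a cluster: 'complete' only when every
    machine is verified; any mixture means the push was not atomic and
    the automation should halt and call for intervention."""
    states = set(machine_states)
    if states == {PushState.VERIFIED}:
        return "complete"
    if states == {PushState.OLD}:
        return "not_started"
    return "inconsistent"

# A mid-push network failure can leave the cluster in a mixed state:
print(cluster_rollout_state(
    [PushState.VERIFIED, PushState.PUSHED, PushState.OLD]))  # prints "inconsistent"
```

An abstraction that only models "old" and "new" cannot represent the third result at all, which is exactly the leaky-abstraction failure described above.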
Very few abstractions model these kinds of outcomes successfully, and most generally end up halting themselves and calling for intervention. Truly bad automation systems don’t even do that. SRE has a number of philosophies and products in the domain of automation, some of which look more like generic rollout tools without particularly detailed modeling of higher-level entities, and some of which look more like languages for describing service deployment (and so on) at a very abstract level. Work done in the latter tends to be more reusable and be more of a common platform than the former, but the complexity of our production environment sometimes means that the former approach is the most immediately tractable option.

A Hierarchy of Automation Classes

Although all of these automation steps are valuable, and indeed an automation platform is valuable in and of itself, in an ideal world, we wouldn’t need externalized automation. In fact, instead of having a system that has to have external glue logic, it would be even better to have a system that needs no glue logic at all, not just because internalization is more efficient (although such efficiency is useful), but because it has been designed to not need glue logic in the first place. Accomplishing that involves taking the use cases for glue logic—generally “first order” manipulations of a system, such as adding accounts or performing system turnup—and finding a way to handle those use cases directly within the application. As a more detailed example, most turnup automation at Google is problematic because it ends up being maintained separately from the core system and therefore suffers from “bit rot,” i.e., not changing when the underlying systems change. Despite the best of intentions, attempting to more tightly couple the two (turnup automation and the core system) often fails due to unaligned priorities, as product developers will, not unreasonably, resist a test deployment requirement for every change.
Secondly, automation that is crucial but only executed at infrequent intervals and therefore difficult to test is often particularly fragile because of the extended feedback cycle. Cluster failover is one classic example of infrequently executed automation: failovers might only occur every few months, or infrequently enough that inconsistencies between instances are introduced. The evolution of automation follows a path:

1) No automation: Database master is failed over manually between locations.
2) Externally maintained system-specific automation: An SRE has a failover script in his or her home directory.
3) Externally maintained generic automation: The SRE adds database support to a "generic failover" script that everyone uses.
4) Internally maintained system-specific automation: The database ships with its own failover script.
5) Systems that don’t need any automation: The database notices problems, and automatically fails over without human intervention.

SRE hates manual operations, so we obviously try to create systems that don’t require them. However, sometimes manual operations are unavoidable.

There is additionally a subvariety of automation that applies changes not across the domain of specific system-related configuration, but across the domain of production as a whole. In a highly centralized proprietary production environment like Google’s, there are a large number of changes that have a non–service-specific scope—e.g., changing upstream Chubby servers, a flag change to the Bigtable client library to make access more reliable, and so on—which nonetheless need to be safely managed and rolled back if necessary. Beyond a certain volume of changes, it is infeasible for production-wide changes to be accomplished manually, and at some time before that point, it’s a waste to have manual oversight for a process where a large proportion of the changes are either trivial or accomplished successfully by basic relaunch-and-check strategies.
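The end state of this evolution (step 5) amounts to a control loop built into the system itself. A minimal sketch, assuming hypothetical `is_healthy` and `promote` hooks into the database, might look like:

```python
def failover_step(primary, replicas, is_healthy, promote):
    """One iteration of an autonomous failover loop (step 5 above): if
    the primary is down, promote the first healthy replica and return
    it as the new primary; no human presses any button."""
    if is_healthy(primary):
        return primary
    for replica in replicas:
        if is_healthy(replica):
            promote(replica)
            return replica
    raise RuntimeError("no healthy replica available to promote")

# Simulate a dead primary: db2 is promoted automatically.
healthy = {"db2", "db3"}
new_primary = failover_step("db1", ["db2", "db3"],
                            is_healthy=lambda db: db in healthy,
                            promote=lambda db: None)
print(new_primary)  # prints "db2"
```

A real system would run this loop continuously; the point of the sketch is that the decision and the action live inside the system, not in a script in someone's home directory.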
Let’s use internal case studies to illustrate some of the preceding points in detail. The first case study is about how, due to some diligent, far-sighted work, we managed to achieve the self-professed nirvana of SRE: to automate ourselves out of a job.

Automate Yourself Out of a Job: Automate ALL the Things!

For a long while, the Ads products at Google stored their data in a MySQL database. Because Ads data obviously has high reliability requirements, an SRE team was charged with looking after that infrastructure. From 2005 to 2008, the Ads Database mostly ran in what we considered to be a mature and managed state. For example, we had automated away the worst, but not all, of the routine work for standard replica replacements. We believed the Ads Database was well managed and that we had harvested most of the low-hanging fruit in terms of optimization and scale. However, as daily operations became comfortable, team members began to look at the next level of system development: migrating MySQL onto Google’s cluster scheduling system, Borg. We hoped this migration would provide two main benefits:

- Completely eliminate machine/replica maintenance: Borg would automatically handle the setup/restart of new and broken tasks.
- Enable bin-packing of multiple MySQL instances on the same physical machine: Borg would enable more efficient use of machine resources via Containers.

In late 2008, we successfully deployed a proof of concept MySQL instance on Borg. Unfortunately, this was accompanied by a significant new difficulty. A core operating characteristic of Borg is that its tasks move around automatically. Tasks commonly move within Borg as frequently as once or twice per week. This frequency was tolerable for our database replicas, but unacceptable for our masters. At that time, the process for master failover took 30–90 minutes per instance.
Simply because we ran on shared machines and were subject to reboots for kernel upgrades, in addition to the normal rate of machine failure, we had to expect a number of otherwise unrelated failovers every week. This factor, in combination with the number of shards on which our system was hosted, meant that:

- Manual failovers would consume a substantial amount of human hours and would give us best-case availability of 99% uptime, which fell short of the actual business requirements of the product.
- In order to meet our error budgets, each failover would have to take less than 30 seconds of downtime.
- There was no way to optimize a human-dependent procedure to make downtime shorter than 30 seconds.

Therefore, our only choice was to automate failover. Actually, we needed to automate more than just failover. In 2009 Ads SRE completed our automated failover daemon, which we dubbed “Decider.” Decider could complete MySQL failovers for both planned and unplanned failovers in less than 30 seconds 95% of the time. With the creation of Decider, MySQL on Borg (MoB) finally became a reality. We graduated from optimizing our infrastructure for a lack of failover to embracing the idea that failure is inevitable, and therefore optimizing to recover quickly through automation.

While automation let us achieve highly available MySQL in a world that forced up to two restarts per week, it did come with its own set of costs. All of our applications had to be changed to include significantly more failure-handling logic than before. Given that the norm in the MySQL development world is to assume that the MySQL instance will be the most stable component in the stack, this switch meant customizing software like JDBC to be more tolerant of our failure-prone environment. However, the benefits of migrating to MoB with Decider were well worth these costs. Once on MoB, the time our team spent on mundane operational tasks dropped by 95%.
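The error-budget arithmetic behind the 30-second requirement can be made concrete. The figures below are illustrative (the chapter does not give the exact shard count or failover rate), but they show why 30–90 minute manual failovers cap availability near 99% while sub-30-second automated ones comfortably clear 99.99%:

```python
WEEK_S = 7 * 24 * 3600  # seconds in a week

def availability(failovers_per_week, downtime_per_failover_s):
    """Uptime fraction for a single shard, given a weekly failover rate
    and the downtime incurred by each failover."""
    return 1 - (failovers_per_week * downtime_per_failover_s) / WEEK_S

# Two failovers a week, handled manually at ~45 minutes each:
print(round(availability(2, 45 * 60), 4))   # prints 0.9911
# The same failovers at under 30 seconds each, via a Decider-like daemon:
print(round(availability(2, 30), 4))        # prints 0.9999
```

The human procedure cannot be optimized below 30 seconds of downtime, so only automation can move a shard from the first line to the second.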
Our failovers were automated, so an outage of a single database task no longer paged a human. The main upshot of this new automation was that we had a lot more free time to spend on improving other parts of the infrastructure. Such improvements had a cascading effect: the more time we saved, the more time we were able to spend on optimizing and automating other tedious work. Eventually, we were able to automate schema changes, causing the cost of total operational maintenance of the Ads Database to drop by nearly 95%. Some might say that we had successfully automated ourselves out of this job. The hardware side of our domain also saw improvement. Migrating to MoB freed up considerable resources because we could schedule multiple MySQL instances on the same machines, which improved utilization of our hardware. In total, we were able to free up about 60% of our hardware. Our team was now flush with hardware and engineering resources.

This example demonstrates the wisdom of going the extra mile to deliver a platform rather than replacing existing manual procedures. The next example comes from the cluster infrastructure group, and illustrates some of the more difficult trade-offs you might encounter on your way to automating all the things.

Soothing the Pain: Applying Automation to Cluster Turnups

Ten years ago, the Cluster Infrastructure SRE team seemed to get a new hire every few months. As it turned out, that was approximately the same frequency at which we turned up a new cluster. Because turning up a service in a new cluster gives new hires exposure to a service’s internals, this task seemed like a natural and useful training tool. The steps taken to get a cluster ready for use were something like the following:

1. Fit out a datacenter building for power and cooling.
2. Install and configure core switches and connections to the backbone.
3. Install a few initial racks of servers.
4. Configure basic services such as DNS and installers, then configure a lock service, storage, and computing.
5. Deploy the remaining racks of machines.
6. Assign user-facing services resources, so their teams can set up the services.

Steps 4 and 6 were extremely complex. While basic services like DNS are relatively simple, the storage and compute subsystems at that time were still in heavy development, so new flags, components, and optimizations were added weekly. Some services had more than a hundred different component subsystems, each with a complex web of dependencies. Failing to configure one subsystem, or configuring a system or component differently than other deployments, is a customer-impacting outage waiting to happen.

In one case, a multi-petabyte Bigtable cluster was configured to not use the first (logging) disk on 12-disk systems, for latency reasons. A year later, some automation assumed that if a machine’s first disk wasn’t being used, that machine didn’t have any storage configured; therefore, it was safe to wipe the machine and set it up from scratch. All of the Bigtable data was wiped, instantly. Thankfully we had multiple real-time replicas of the dataset, but such surprises are unwelcome. Automation needs to be careful about relying on implicit "safety" signals.

Early automation focused on accelerating cluster delivery. This approach tended to rely upon creative use of SSH for tedious package distribution and service initialization problems. This strategy was an initial win, but those free-form scripts became a cholesterol of technical debt.

Detecting Inconsistencies with Prodtest

As the number of clusters grew, some clusters required hand-tuned flags and settings. As a result, teams wasted more and more time chasing down difficult-to-spot misconfigurations. If a flag that made GFS more responsive to log processing leaked into the default templates, cells with many files could run out of memory under load.
Infuriating and time-consuming misconfigurations crept in with nearly every large configuration change. The creative—though brittle—shell scripts we used to configure clusters were neither scaling to the number of people who wanted to make changes nor to the sheer number of cluster permutations that needed to be built. These shell scripts also failed to resolve more significant concerns before declaring that a service was good to take customer-facing traffic, such as:

- Were all of the service’s dependencies available and correctly configured?
- Were all configurations and packages consistent with other deployments?
- Could the team confirm that every configuration exception was desired?

Prodtest (Production Test) was an ingenious solution to these unwelcome surprises. We extended the Python unit test framework to allow for unit testing of real-world services. These unit tests have dependencies, allowing a chain of tests, and a failure in one test would quickly abort. Take the test shown in Figure 7-1 as an example.

Figure 7-1. ProdTest for DNS Service, showing how one failed test aborts the subsequent chain of tests

A given team’s Prodtest was given the cluster name, and it could validate that team’s services in that cluster. Later additions allowed us to generate a graph of the unit tests and their states. This functionality allowed an engineer to see quickly if their service was correctly configured in all clusters, and if not, why. The graph highlighted the failed step, and the failing Python unit test output a more verbose error message. Any time a team encountered a delay due to another team’s unexpected misconfiguration, a bug could be filed to extend their Prodtest. This ensured that a similar problem would be discovered earlier in the future. SREs were proud to be able to assure their customers that all services—both newly turned up services and existing services with new configuration—would reliably serve production traffic.
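The chaining behavior Prodtest relies on can be sketched with a toy runner. This is an illustration of the idea, not Google's actual framework: tests declare prerequisites, and one failure aborts everything downstream of it, as in Figure 7-1.

```python
class ProdTestChain:
    """Toy dependency-aware test runner: a failed prerequisite marks
    every dependent test 'aborted' instead of running it."""
    def __init__(self):
        self.tests = {}        # name -> (prerequisites, test function)
        self.results = {}

    def add(self, name, func, deps=()):
        self.tests[name] = (tuple(deps), func)

    def run(self):
        for name, (deps, func) in self.tests.items():  # insertion order
            if any(self.results.get(d) != "pass" for d in deps):
                self.results[name] = "aborted"         # prerequisite failed
                continue
            try:
                func()
                self.results[name] = "pass"
            except AssertionError:
                self.results[name] = "fail"
        return self.results

# Hypothetical DNS checks: a serving failure aborts the dependent test.
def dns_resolves():
    pass                                    # simulated: resolution works

def dns_serves_records():
    assert False, "stale records detected"  # simulated misconfiguration

def dns_monitoring_ok():
    pass                                    # never runs in this scenario

suite = ProdTestChain()
suite.add("dns_resolves", dns_resolves)
suite.add("dns_serves_records", dns_serves_records, deps=["dns_resolves"])
suite.add("dns_monitoring_ok", dns_monitoring_ok, deps=["dns_serves_records"])
print(suite.run())
```

Aborting downstream tests keeps the failure report pointed at the first broken link in the dependency chain rather than burying it under cascading errors.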
For the first time, our project managers could predict when a cluster could "go live," and had a complete understanding of why each cluster took six or more weeks to go from "network-ready" to "serving live traffic." Out of the blue, SRE received a mission from senior management: In three months, five new clusters will reach network-ready on the same day. Please turn them up in one week.

Resolving Inconsistencies Idempotently

A "One Week Turnup" was a terrifying mission. We had tens of thousands of lines of shell script owned by dozens of teams. We could quickly tell how unprepared any given cluster was, but fixing it meant that the dozens of teams would have to file hundreds of bugs, and then we had to hope that these bugs would be promptly fixed. We realized that evolving from "Python unit tests finding misconfigurations" to "Python code fixing misconfigurations" could enable us to fix these issues faster. The unit test already knew which cluster we were examining and the specific test that was failing, so we paired each test with a fix. If each fix was written to be idempotent, and could assume that all dependencies were met, resolving the problem should have been easy—and safe. Requiring idempotent fixes meant teams could run their "fix script" every 15 minutes without fearing damage to the cluster’s configuration. If the DNS team’s test was blocked on the Machine Database team’s configuration of a new cluster, as soon as the cluster appeared in the database, the DNS team’s tests and fixes would start working.

Take the test shown in Figure 7-2 as an example. If TestDnsMonitoringConfigExists fails, as shown, we can call FixDnsMonitoringCreateConfig, which scrapes configuration from a database, then checks a skeleton configuration file into our revision control system. Then TestDnsMonitoringConfigExists passes on retry, and the TestDnsMonitoringConfigPushed test can be attempted. If the test fails, the FixDnsMonitoringPushConfig step runs.
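A minimal sketch of this test/fix pairing (the function names are hypothetical stand-ins for pairs like TestDnsMonitoringConfigExists/FixDnsMonitoringCreateConfig): each failing test triggers its idempotent fix and is retried, with a retry cap so a broken fix cannot loop forever.

```python
def run_with_fixes(pairs, max_attempts=3):
    """Run (test, fix) pairs in order. A failing test invokes its
    idempotent fix and is retried; after max_attempts the automation
    gives up and reports the failure rather than looping."""
    for test, fix in pairs:
        for _ in range(max_attempts):
            if test():
                break
            fix()          # idempotent: safe to run every 15 minutes
        else:
            return f"gave up: {test.__name__} still failing"
    return "all tests pass"

# A missing config is created by its fix, then the test passes on retry.
configs = {}

def test_monitoring_config_exists():
    return "dns-monitoring" in configs

def fix_create_monitoring_config():
    configs.setdefault("dns-monitoring", "skeleton config")  # idempotent

print(run_with_fixes([(test_monitoring_config_exists,
                       fix_create_monitoring_config)]))  # prints "all tests pass"
```

Idempotence is what makes re-running safe: `setdefault` leaves an existing config untouched, so executing the fix a second time changes nothing.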
If a fix fails multiple times, the automation assumes that the fix failed and stops, notifying the user. Armed with these scripts, a small group of engineers could ensure that we could go from "The network works, and machines are listed in the database" to "Serving 1% of websearch and ads traffic" in a matter of a week or two. At the time, this seemed to be the apex of automation technology. Looking back, this approach was deeply flawed; the latency between the test, the fix, and then a second test introduced flaky tests that sometimes worked and sometimes failed. Not all fixes were naturally idempotent, so a flaky test that was followed by a fix might render the system in an inconsistent state.

Figure 7-2. ProdTest for DNS Service, showing that one failed test resulted in only running one fix

The Inclination to Specialize

Automation processes can vary in three respects:

- Competence, i.e., their accuracy
- Latency, how quickly all steps are executed when initiated
- Relevance, or proportion of real-world process covered by automation

We began with a process that was highly competent (maintained and run by the service owners), high-latency (the service owners performed the process in their spare time or assigned it to new engineers), and very relevant (the service owners knew when the real world changed, and could fix the automation). To reduce turnup latency, many service owning teams instructed a single "turnup team" what automation to run. The turnup team used tickets to start each stage in the turnup so that we could track the remaining tasks, and who those tasks were assigned to. If the human interactions regarding automation modules occurred between people in the same room, cluster turnups could happen in a much shorter time. Finally, we had our competent, accurate, and timely automation process! But this state didn’t last long. The real world is chaotic: software, configuration, data, etc.
changed, resulting in over a thousand separate changes a day to affected systems. The people most affected by automation bugs were no longer domain experts, so the automation became less relevant (meaning that new steps were missed) and less competent (new flags might have caused automation to fail). However, it took a while for this drop in quality to impact velocity. Automation code, like unit test code, dies when the maintaining team isn’t obsessive about keeping the code in sync with the codebase it covers. The world changes around the code: the DNS team adds new configuration options, the storage team changes their package names, and the networking team needs to support new devices. By relieving teams who ran services of the responsibility to maintain and run their automation code, we created ugly organizational incentives:

- A team whose primary task is to speed up the current turnup has no incentive to reduce the technical debt of the service-owning team running the service in production later.
- A team not running automation has no incentive to build systems that are easy to automate.
- A product manager whose schedule is not affected by low-quality automation will always prioritize new features over simplicity and automation.

The most functional tools are usually written by those who use them. A similar argument applies to why product development teams benefit from keeping at least some operational awareness of their systems in production. Turnups were again high-latency, inaccurate, and incompetent—the worst of all worlds. However, an unrelated security mandate allowed us out of this trap. Much of distributed automation relied at that time on SSH. This is clumsy from a security perspective, because people must have root on many machines to run most commands. A growing awareness of advanced, persistent security threats drove us to reduce the privileges SREs enjoyed to the absolute minimum they needed to do their jobs.
We had to replace our use of sshd with an authenticated, ACL-driven, RPC-based Local Admin Daemon, also known as Admin Servers, which had permissions to perform those local changes. As a result, no one could install or modify a server without an audit trail. Changes to the Local Admin Daemon and the Package Repo were gated on code reviews, making it very difficult for someone to exceed their authority; giving someone access to install packages would not let them view colocated logs. The Admin Server logged the RPC requestor, any parameters, and the results of all RPCs to enhance debugging and security audits.

Service-Oriented Cluster-Turnup

In the next iteration, Admin Servers became part of service teams’ workflows, both as related to the machine-specific Admin Servers (for installing packages and rebooting) and cluster-level Admin Servers (for actions like draining or turning up a service). SREs moved from writing shell scripts in their home directories to building peer-reviewed RPC servers with fine-grained ACLs. Later on, after the realization that turnup processes had to be owned by the teams that owned the services fully sank in, we saw this as a way to approach cluster turnup as a Service-Oriented Architecture (SOA) problem: service owners would be responsible for creating an Admin Server to handle cluster turnup/turndown RPCs, sent by the system that knew when clusters were ready. In turn, each team would provide the contract (API) that the turnup automation needed, while still being free to change the underlying implementation. As a cluster reached "network-ready," automation sent an RPC to each Admin Server that played a part in turning up the cluster. We now have a low-latency, competent, and accurate process; most importantly, this process has stayed strong as the rate of change, the number of teams, and the number of services seem to double each year.
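The shape of such an Admin Server can be sketched as follows. The class and its ACL scheme are illustrative, not the real daemon's interface: every call is checked against a per-method ACL, and the requestor, parameters, and result are recorded whether the call is allowed or denied.

```python
import time

class AdminServer:
    """Toy ACL-driven admin daemon: per-method authorization plus an
    audit trail of every RPC, allowed or denied."""
    def __init__(self, acls):
        self.acls = acls          # method name -> set of allowed principals
        self.audit_log = []

    def call(self, principal, method, handler, **params):
        if principal in self.acls.get(method, set()):
            result = handler(**params)
        else:
            result = "PERMISSION_DENIED"
        self.audit_log.append({"time": time.time(), "principal": principal,
                               "method": method, "params": params,
                               "result": result})
        return result

# Package installs are allowed for the turnup automation only.
server = AdminServer({"install_package": {"turnup-automation"}})
ok = server.call("turnup-automation", "install_package",
                 lambda name: f"installed {name}", name="dns-server")
denied = server.call("some-human", "install_package",
                     lambda name: f"installed {name}", name="dns-server")
print(ok, "|", denied)  # prints "installed dns-server | PERMISSION_DENIED"
```

Because authorization is per method, granting install rights says nothing about, say, log access, which mirrors the least-privilege point above.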
As mentioned earlier, our evolution of turnup automation followed a path:

1. Operator-triggered manual action (no automation)
2. Operator-written, system-specific automation
3. Externally maintained generic automation
4. Internally maintained, system-specific automation
5. Autonomous systems that need no human intervention

While this evolution has, broadly speaking, been a success, the Borg case study illustrates another way we have come to think of the problem of automation.

Borg: Birth of the Warehouse-Scale Computer

Another way to understand the development of our attitude toward automation, and when and where that automation is best deployed, is to consider the history of the development of our cluster management systems. 31 Like MySQL on Borg, which demonstrated the success of converting manual operations to automatic ones, and the cluster turnup process, which demonstrated the downside of not thinking carefully enough about where and how automation was implemented, developing cluster management also ended up demonstrating another lesson about how automation should be done. Like our previous two examples, something quite sophisticated was created as the eventual result of continuous evolution from simpler beginnings.

Google’s clusters were initially deployed much like everyone else’s small networks of the time: racks of machines with specific purposes and heterogeneous configurations. Engineers would log in to some well-known “master” machine to perform administrative tasks; “golden” binaries and configuration lived on these masters. As we had only one colo provider, most naming logic implicitly assumed that location. As production grew, and we began to use multiple clusters, different domains (cluster names) entered the picture. It became necessary to have a file describing what each machine did, which grouped machines under some loose naming strategy.
This descriptor file, in combination with the equivalent of a parallel SSH, allowed us to reboot (for example) all the search machines in one go. Around this time, it was common to get tickets like “search is done with machine x1, crawl can have the machine now.” Automation development began. Initially automation consisted of simple Python scripts for operations such as the following:

- Service management: keeping services running (e.g., restarts after segfaults)
- Tracking what services were supposed to run on which machines
- Log message parsing: SSHing into each machine and looking for regexps

Automation eventually mutated into a proper database that tracked machine state, and also incorporated more sophisticated monitoring tools. With the union set of the automation available, we could now automatically manage much of the lifecycle of machines: noticing when machines were broken, removing the services, sending them to repair, and restoring the configuration when they came back from repair. But to take a step back, this automation was useful yet profoundly limited, due to the fact that abstractions of the system were relentlessly tied to physical machines. We needed a new approach, hence Borg [Ver15] was born: a system that moved away from the relatively static host/port/job assignments of the previous world, toward treating a collection of machines as a managed sea of resources. Central to its success—and its conception—was the notion of turning cluster management into an entity for which API calls could be issued, to some central coordinator. This liberated extra dimensions of efficiency, flexibility, and reliability: unlike the previous model of machine “ownership,” Borg could allow machines to schedule, for example, batch and user-facing tasks on the same machine.
This functionality ultimately resulted in continuous and automatic operating system upgrades with a very small amount of constant 32 effort—effort that does not scale with the total size of production deployments. Slight deviations in machine state are now automatically fixed; brokenness and lifecycle management are essentially no-ops for SRE at this point. Thousands of machines are born, die, and go into repairs daily with no SRE effort. To echo the words of Ben Treynor Sloss: by taking the approach that this was a software problem, the initial automation bought us enough time to turn cluster management into something autonomous, as opposed to automated. We achieved this goal by bringing ideas related to data distribution, APIs, hub-and-spoke architectures, and classic distributed system software development to bear upon the domain of infrastructure management. An interesting analogy is possible here: we can make a direct mapping between the single machine case and the development of cluster management abstractions. In this view, rescheduling on another machine looks a lot like a process moving from one CPU to another: of course, those compute resources happen to be at the other end of a network link, but to what extent does that actually matter? Thinking in these terms, rescheduling looks like an intrinsic feature of the system rather than something one would “automate”—humans couldn’t react fast enough anyway. Similarly in the case of cluster turnup: in this metaphor, cluster turnup is simply additional schedulable capacity, a bit like adding disk or RAM to a single computer. However, a single-node computer is not, in general, expected to continue operating when a large number of components fail. The global computer is—it must be self-repairing to operate once it grows past a certain size, due to the essentially statistically guaranteed large number of failures taking place every second. 
This implies that as we move systems up the hierarchy from manually triggered, to automatically triggered, to autonomous, some capacity for self-introspection is necessary to survive.

Reliability Is the Fundamental Feature

Of course, for effective troubleshooting, the details of internal operation that the introspection relies upon should also be exposed to the humans managing the overall system. Analogous discussions about the impact of automation in the noncomputer domain—for example, in airplane flight 33 or industrial applications—often point out the downside of highly effective automation: 34 human operators are progressively more relieved of useful direct contact with the system as the automation covers more and more daily activities over time. Inevitably, then, a situation arises in which the automation fails, and the humans are now unable to successfully operate the system. The fluidity of their reactions has been lost due to lack of practice, and their mental models of what the system should be doing no longer reflect the reality of what it is doing. 35 This situation arises more when the system is nonautonomous—i.e., where automation replaces manual actions, and the manual actions are presumed to be always performable and available just as they were before. Sadly, over time, this ultimately becomes false: those manual actions are not always performable because the functionality to permit them no longer exists.

We, too, have experienced situations where automation has been actively harmful on a number of occasions—see Automation: Enabling Failure at Scale—but in Google’s experience, there are more systems for which automation or autonomous behavior are no longer optional extras. As you scale, this is of course the case, but there are still strong arguments for more autonomous behavior of systems irrespective of size. Reliability is the fundamental feature, and autonomous, resilient behavior is one useful way to get that.
Recommendations

You might read the examples in this chapter and decide that you need to be Google-scale before you have anything to do with automation whatsoever. This is untrue, for two reasons: automation provides more than just time saving, so it’s worth implementing in more cases than a simple time-expended versus time-saved calculation might suggest. But the approach with the highest leverage actually occurs in the design phase: shipping and iterating rapidly might allow you to implement functionality faster, yet rarely makes for a resilient system. Autonomous operation is difficult to convincingly retrofit to sufficiently large systems, but standard good practices in software engineering will help considerably: having decoupled subsystems, introducing APIs, minimizing side effects, and so on.

Automation: Enabling Failure at Scale

Google runs over a dozen of its own large datacenters, but we also depend on machines in many third-party colocation facilities (or "colos"). Our machines in these colos are used to terminate most incoming connections, or as a cache for our own Content Delivery Network, in order to lower end-user latency. At any point in time, a number of these racks are being installed or decommissioned; both of these processes are largely automated. One step during decommission involves overwriting the full content of the disk of all the machines in the rack, after which point an independent system verifies the successful erase. We call this process "Diskerase."

Once upon a time, the automation in charge of decommissioning a particular rack failed, but only after the Diskerase step had completed successfully. Later, the decommission process was restarted from the beginning, to debug the failure. On that iteration, when trying to send the set of machines in the rack to Diskerase, the automation determined that the set of machines that still needed to be Diskerased was (correctly) empty.
Unfortunately, the empty set was used as a special value, interpreted to mean "everything." This means the automation sent almost all the machines we have in all colos to Diskerase. Within minutes, the highly efficient Diskerase wiped the disks on all machines in our CDN, and the machines were no longer able to terminate connections from users (or do anything else useful). We were still able to serve all the users from our own datacenters, and after a few minutes the only effect visible externally was a slight increase in latency. As far as we could tell, very few users noticed the problem at all, thanks to good capacity planning (at least we got that right!). Meanwhile, we spent the better part of two days reinstalling the machines in the affected colo racks; then we spent the following weeks auditing and adding more sanity checks—including rate limiting—into our automation, and making our decommission workflow idempotent.

26 For readers who already feel they precisely understand the value of automation, skip ahead to The Value for Google SRE. However, note that our description contains some nuances that might be useful to keep in mind while reading the rest of the chapter.
27 The expertise acquired in building such automation is also valuable in itself; engineers both deeply understand the existing processes they have automated and can later automate novel processes more quickly.
28 See the following XKCD cartoon: https://xkcd.com/1205/.
29 See, for example, https://blog.engineyard.com/2014/pets-vs-cattle.
30 Of course, not every system that needs to be managed actually provides callable APIs for management—forcing some tooling to use, e.g., CLI invocations or automated website clicks.
31 We have compressed and simplified this history to aid understanding.
32 As in a small, unchanging number.
33 See, e.g., https://en.wikipedia.org/wiki/Air_France_Flight_447.
34 See, e.g., [Bai83] and [Sar97].
35 This is yet another good reason for regular practice drills; see Disaster Role Playing.

Copyright © 2017 Google, Inc. Published by O'Reilly Media, Inc. Licensed under CC BY-NC-ND 4.0
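The empty-set failure described in "Automation: Enabling Failure at Scale" can be reduced to a small sketch. All names here are hypothetical illustrations of the API contract, not Google's actual tooling; the fix mirrors the remedies the text mentions (no wildcard value, sanity checks with rate limiting, idempotent reruns):

```python
# Minimal sketch of the Diskerase failure mode and one fix (hypothetical names).

def machines_still_to_erase(rack_machines, already_erased):
    # On the rerun, every machine in the rack had already been erased,
    # so this correctly returns the empty set.
    return set(rack_machines) - set(already_erased)

def diskerase_buggy(targets, fleet):
    # BUG: the empty set is a special value meaning "everything".
    if not targets:
        targets = set(fleet)
    return set(targets)  # machines that would be wiped

fleet = {f"machine-{i}" for i in range(100)}
rack = {"machine-1", "machine-2", "machine-3"}
wiped = diskerase_buggy(machines_still_to_erase(rack, rack), fleet)
assert wiped == fleet  # the rerun sends the entire fleet to Diskerase

def diskerase_safe(targets, fleet, max_fraction=0.01):
    # Post-outage contract: an empty request erases nothing (so reruns are
    # idempotent), and a rate limit bounds the blast radius of one request.
    if not targets:
        return set()
    if len(targets) > max_fraction * len(fleet):
        raise RuntimeError("refusing to erase that much of the fleet at once")
    return set(targets)

assert diskerase_safe(set(), fleet) == set()  # safe to rerun after success
```

The design point is that "no work to do" and "do everything" must never share a representation; a destructive wildcard should be an explicit, separately authorized request if it exists at all.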
2026-01-13T09:29:20
https://www.linkedin.com/products/axway-amplify-managed-file-transfer/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_image-click
Axway Managed File Transfer | LinkedIn

Axway Managed File Transfer — Managed File Transfer (MFT) Software by Axway

About: Secure, reliable, and easy-to-manage solution for transferring data between people, partners, businesses, and applications. Manage and control how your organization sends and receives data so you can ensure it stays protected and meets compliance regulations such as HIPAA, PCI DSS, and GDPR.

This product is intended for: Chief Information Officer, Head of Operations, Head of Supply Chain Management, Head of Information Technology, Head of Logistics, Head of Business Operations, Director of Information Technology Infrastructure, Director of Operations.

Media:
Axway SecureTransport Interactive Demo — Contact us here: https://www.axway.com/en/contact-us-mft. Welcome to our interactive demo of SecureTransport, one of many solutions in the Axway MFT product line. This quick demo will take you through a number of scenarios, showing you how and where to configure SecureTransport to achieve efficient and secure file transfers for your organization.
Axway's Approach to Secure File Transfer in Financial Services — Meetesh Patel, MFT General Manager, discusses Axway's approach to secure file transfer in financial services.
Daimler Truck | Using Axway MFT to operate critical flows globally — Discover how Daimler Truck, a global leader in commercial vehicle manufacturing, has harnessed the power of Axway MFT to enhance its worldwide operations, enabling them to achieve unparalleled agility, reliability, and security in both B2B and B2C.
Customer Spotlight: Alaska Airlines' journey with Axway MFT — Watch the Alaska Airlines team as they share their journey with Axway and the pivotal role MFT plays in their business operations. In this video, you’ll hear firsthand how they’ve experienced improved reliability and enhanced security, ensuring seamless data transfers and bolstering their operational efficiency.

Featured customers of Axway Managed File Transfer: State of California, Textron, Ciena, Groupe CAT, Dun & Bradstreet, Groupe AGRICA, Banco do Brasil, Railinc Corp., Bosch, AG2R LA MONDIALE, Skipton Building Society, Alaska Airlines, Acerta, Inmar Intelligence, Bpifrance, Sopra Steria, Cardinal Health, CommonSpirit Health, Serasa Experian, Bundesagentur für Arbeit, B3.

Similar products (all Managed File Transfer (MFT) Software): Progress MOVEit, Serv-U Managed File Transfer Server, JSCAPE by Redwood, Cerberus by Redwood, MLADU, dDataBox.

Axway products: Amplify API Management Platform (API Management Software), Axway Financial Accounting Hub.
2026-01-13T09:29:20
https://www.chiark.greenend.org.uk/~sgtatham/putty/mirrors.html
PuTTY Web Site Mirrors PuTTY Web Site Mirrors Home | FAQ | Feedback | Licence | Updates | Mirrors | Keys | Links | Team Download: Stable · Snapshot | Docs | Privacy | Changes | Wishlist Here is a list of PuTTY web site mirrors. If you would like to set up another mirror, see the mirroring guidelines below. The official PuTTY home site, in case that isn't where you're reading this, is https://www.chiark.greenend.org.uk/~sgtatham/putty/ HTTP mirrors of the whole site: putty.md5.com.ar in Argentina . putty.daemon.am in Armenia . mirror.afoyi.com in Australia . putty.taslug.org.au in Australia . putty.mirrors.ilisys.com.au in Australia . putty.4any.org in Austria . putty.mirror.netmonic.com in Austria . bec.at in Austria . eu-by.org in Belarus . putty.spegulo.be in Belgium . putty.be in Belgium . putty.portalinux.org in Belgium . putty.scarlet.be in Belgium . putty.edpnet.net in Belgium . (unreachable?) putty.ehdbrasil.net in Brazil . putty.kinghost.net in Brazil . putty.zloba.ath.cx in Bulgaria . putty.paracoda.com in Canada . mirror.nucleardog.com in Canada . putty.mirror.codersnetwork.co.uk in Canada . gulus.usherbrooke.ca in Canada . cdot.senecac.on.ca in Canada . putty.nasice.org in Croatia . putty.sh.cvut.cz in the Czech Republic . putty.och.cz in the Czech Republic . putty.tanis.dk in Denmark . mirrors.dotsrc.org in Denmark . (formerly sunsite.dk) putty.cofman.dk in Denmark . putty.zone-h.org in Estonia . mirror.cedratnet.com in France . putty.wandis.com in France . putty.miroir-francais.fr in France . putty.fredprod.com in France . putty.cict.fr in France . putty.spijoprod.net in France . putty.linux4all.homelinux.net in Germany . putty.phpmail.de in Germany . putty.aalener-mirror.de in Germany . putty.bemirror.org in Germany . putty.visiolab.de in Germany . putty.linux-mirror.org in Germany . very-clever.com in Germany . putty.mirroarrr.de in Germany . putty.spiegelserver.org in Germany . putty.xedio.de in Germany . putty.triplemind.com in Germany . 
putty.freemirror.de in Germany . putty.rorrim.org in Germany . huygens.linux4geeks.de in Germany . putty.mirroring.de in Germany . putty.mirrors.php-homepage.de in Germany . putty.mirrorplus.org in Germany . putty.miscellaneousmirror.org in Germany . putty.huewebrothers.de in Germany . putty.obengelb.de in Germany . netmirror.org in Germany . mirror.nimsay-networks.com in Germany . mirrors.ee.teiath.gr in Greece . putty.in51.com in Hong Kong . putty.internet.bs in Hong Kong . putty.archive.hk in Hong Kong . putty.udstudio.hu in Hungary . putty.matrix.is in Iceland . putty.cbn.net.id in Indonesia . putty.oss-mirror.org in Ireland . heanet.ie in Ireland . active.co.il in Israel . putty.fagioli.biz in Italy . putty.stoic.jp in Japan . vine.stoic.jp in Japan . kaist.ac.kr in Korea . putty.garmtech.lv in Latvia . putty.say-problem.net in Latvia . hardcore.lt in Lithuania . putty.ion.lu in Luxembourg . putty.interunix.net in Malaysia . putty.leakage.org in Malaysia . putty.nedzone.nl in the Netherlands . putty.imtek.nl in the Netherlands . putty.jl-projects.com in the Netherlands . putty.osmirror.nl in the Netherlands . putty.mirror.jt.org in the Netherlands . putty.nedmirror.nl in the Netherlands . frankenhuizen.nl in the Netherlands . putty.fluoline.net in the Netherlands . putty.mirror.nextit.nl in the Netherlands . cuba.calyx.nl in the Netherlands . stuwww.uvt.nl in the Netherlands . putty.servicez.org in the Netherlands . putty.coolzero.info in the Netherlands . wigen.net in Norway . putty.fupp.net in Norway . putty.net.pl in Poland . piotrkosoft.net in Poland . pitow.wroc.pl in Poland . putty.dcc.fc.up.pt in Portugal . neacm.fe.up.pt in Portugal . mirrors.ptm.ro in Romania . over-net.ro in Romania . putty.n9.ru in Russia . putty.lxnt.info in Russia . www.putty.spb.ru in Russia putty.nigilist.ru in Russia . putty.lamer.sk in Slovakia . putty.fyxm.net in Slovakia . putty.paknet.org in Slovenia . mirrors.bevc.net in Slovenia . putty.nightlight.biz in Spain . 
ftp.acc.umu.se in Sweden . putty.tx.se in Sweden . kos.li in Switzerland . putty.thaismartnetwork.com in Thailand . putty.thaiweb.net in Thailand . putty.vargonen.net in Turkey . debian.phys.hacettepe.edu.tr in Turkey . www.bfteam.com in Turkey (unreachable?) putty.mirrors.org.ua in Ukraine . mirrors.xifos.net in the UK . mirror.thekeelecentre.com in the UK . putty.carbonstudios.co.uk in the UK . sourcekeg.co.uk in the UK . putty.mirror.facebook.com in the US . scriptycan.com in the US . mirrors.bbnx.net in the US . putty.leetnet.com in the US . ftp.wayne.edu in the US . putty.hoxt.com in the US . ayush.org in the US . putty.grephead.com in the US . putty.hostingzero.com in the US . putty.cs.utah.edu in the US . putty.jwenet.net in the US . mirror.zonekeep.com in the US . mirrormonster.com in the US . putty.rtin.bz in the US . putty.omnitech.net in the US . silvertree.org in the US . mirrors.unix-boy.com in the US . puttymirror.rowehost.com in the US . tprinteractive.net in the US . putty.nobandwidth.net in the US . putty.mirrors.redwire.net in the US . diis.net in the US . mirrors.omnicomp.org in the US . puttyssh.org in the US . Be aware that the mirrors are not updated instantly. FTP mirrors of the PuTTY releases: ftp.wiretapped.net in Australia. ftp.samurai.com in Canada. cdot.senecac.on.ca in Canada. miroir-francais.fr in France. neutron.blogeek.org in France. netmirror.org in Germany. ftp.uni-oldenburg.de in Germany. (unreachable?) totem.fix.no in Norway. piotrkosoft.net in Poland. ftp.man.szczecin.pl in Poland. ftp.ds5.agh.edu.pl in Poland. ftp.mipt.ru in Russia. putty.cs.utah.edu in the US. putty.dudcore.net in the US. diis.net in the US. b-o-b.homelinux.com in the US. HTTP mirrors of the PuTTY development snapshots: kaizo.org in the UK. FTP mirrors of the PuTTY development snapshots: ftp.man.szczecin.pl in Poland. kaizo.org in the UK. Mirroring guidelines If you want to set up a mirror of the PuTTY website, go ahead and set one up. 
Please don't bother asking us for permission before setting up a mirror. You already have permission. If the mirror is in a country where we don't already have plenty of mirrors, we may be willing to add it to the list on this page. Read the guidelines below, make sure your mirror works, and email us the information listed at the bottom of the page. NOTE : We do not promise to list your mirror, or anyone's. We get a lot of mirror notifications, and yours may not happen to find its way to the top of the list. NOTE also: as of 2007-12-20, we link to all our mirror sites using the rel="nofollow" attribute. Running a PuTTY mirror is not intended to be a cheap way to gain search rankings. The preferred (and simplest) way to mirror the PuTTY website is to use rsync . We provide a version of the website content intended for use as a standalone mirror, at rsync://rsync.chiark.greenend.org.uk/ftp/users/sgtatham/putty-website-mirror . So you could set up a mirror by running a cron job which issued a command something like this every day: rsync -auH rsync://rsync.chiark.greenend.org.uk/ftp/users/sgtatham/putty-website-mirror/ . You should run this command inside the directory where you plan to put the mirror; when that command is run, it will fill the current directory with HTML files and subdirectories. Alternatively, you can replace . with the name of the target directory. Since rsync is incremental, there should be no reason not to update frequently, although currently there's no point in doing so more often than once a day, and our server does have a limit on the number of rsync connections. In any case, we would recommend updating no less often than once a week, in order to fetch any urgent updates such as security bugfixes. You can also subscribe to our mailing list to receive notification of new releases. We used to support an alternative method of mirroring using GNU wget , and provided a sample shell script. 
This is now deprecated in favour of rsync, for the following reasons: rsync uses less bandwidth; the rsync method moves all the post-processing complexity to our end, so we can implement changes and deal with bugs much more easily—and in particular, it allows us to insert a note to the effect that the mirrored site is a mirror site, to reduce general confusion; and we've had trouble in the past with mirroring wgets going mad and eating all our host's bandwidth/CPU, which rsync hasn't yet done, to our knowledge.

Once you've set up your mirror, mail us with its address and the country it's in. However, before notifying us, please do test that it works:
Check that the binary download links work.
Check that the binary download links point at your site, not ours. If they point straight back to our own binary downloads, there is not much point in having the mirror site in the first place!
Check that the on-line documentation for the latest release works, and points at your site, not ours.

If you want to comment on this web site, see the Feedback page. (last modified on Sat Feb 8 11:06:01 2025)
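The daily cron job the mirroring guidelines describe can be sketched as a small wrapper. Only the rsync flags and source URL come from the guidelines above; the target directory and the wrapper itself are illustrative:

```python
# Sketch of a daily mirror-update job wrapping the rsync command from the
# PuTTY mirroring guidelines. The target directory is an assumption.
import subprocess

MIRROR_SRC = "rsync://rsync.chiark.greenend.org.uk/ftp/users/sgtatham/putty-website-mirror/"

def build_rsync_cmd(target_dir="."):
    # -a: archive mode; -u: skip files that are newer on the receiver;
    # -H: preserve hard links — exactly the flags the guidelines give.
    return ["rsync", "-auH", MIRROR_SRC, target_dir]

def update_mirror(target_dir):
    # Intended to be run from cron once a day; rsync is incremental, so
    # repeated runs are cheap, but the server limits rsync connections.
    return subprocess.run(build_rsync_cmd(target_dir), check=True)

print(build_rsync_cmd("/var/www/putty-mirror"))
```

In practice the shell one-liner from the guidelines in a crontab entry is equally good; a wrapper like this only earns its keep once you add logging or failure notification around the call.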
2026-01-13T09:29:20
https://w3techs.com/technologies/cross/dns_server/ssl_certificate
Usage Survey of DNS Server Providers broken down by SSL Certificate Authorities — provided by Q-Success

Technologies > DNS Servers > by SSL Certificate Authorities

This diagram shows the percentages of websites using various DNS server providers broken down by SSL certificate authorities. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See technologies overview for explanations on the methodologies used in the surveys.

How to read the diagram: Cloudflare is used by 15.4% of all the websites. Cloudflare is used by 5.0% of all the websites that use Let’s Encrypt as SSL certificate authority.

                         Overall   Let’s Encrypt   GlobalSign   Sectigo   GoDaddy Group   DigiCert Group
Cloudflare                15.4%        5.0%          52.6%       5.3%        4.4%             4.8%
GoDaddy Group             10.1%        8.0%          12.9%       4.2%       45.1%             6.6%
Newfold Digital Group      4.0%        5.0%           2.3%       4.0%        2.0%             2.4%

W3Techs.com, 13 January 2026

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
Technology Brief — DNS Server Providers: A DNS (domain name system) server manages internet domain names and their associated records, such as IP addresses. We group brands that are owned by the same entity, to show the real relevant market shares.

Latest related posting: Web Technologies of the Year 2025 (5 January 2026) — We compiled the list of web technologies that saw the largest increase in usage in 2025.

Copyright © 2009-2026 Q-Success
2026-01-13T09:29:20
https://www.linkedin.com/products/a10networks-a10-thunder-tps-ddos-defense-solutions/?trk=products_seo_search
A10 Defend - Intelligent & Automated DDoS Protection | LinkedIn

A10 Defend — DDoS Protection Software by A10 Networks, Inc

About: A10 Defend provides a holistic DDoS protection solution that is scalable, economical, precise, and intelligent to help customers ensure optimal user and subscriber experiences.

Media:
A10 Defend Threat Control — The A10 Defend suite, composed of A10 Defend Mitigator, A10 Defend Detector, A10 Defend Threat Control, and A10 Defend Orchestrator, provides a holistic solution that is scalable, economical, precise, and intelligent to help customers ensure optimal user and subscriber experiences.
A10 Defend Suite — Used by the top service providers and online gaming companies, the A10 Defend suite consists of several components. A10 Defend Detector efficiently identifies abnormal traffic; A10 Defend Mitigator (previously Thunder TPS) automatically and intelligently mitigates the identified inbound DDoS attack; A10 Defend Threat Control proactively provides standalone layered defense and actionable insights; and A10 Defend Orchestrator (previously aGalaxy) provides seamless DDoS defense execution.
Demo: A10 Defend Threat Control — A10 Defend Threat Control provides a robust first layer of DDoS defense. By leveraging proprietary ML/AI-enhanced data processing techniques, Threat Control proactively monitors attackers and understands key DDoS attack methods, with or without dedicated DDoS prevention solutions.

Featured customers of A10 Defend: World Wide Technology, Fastly, Imperium Dynamics, Dell Technologies.

Similar products (all DDoS Protection Software): Cloudflare DDoS Protection, Cloudflare Spectrum, Akamai Prolexic Routed, OVHcloud Anti-DDoS Protection, Kona DDoS Defender, Kaspersky DDoS Protection.
2026-01-13T09:29:20
https://w3techs.com/technologies/details/dn-squarespace
Usage Statistics and Market Share of Squarespace as DNS Server Provider, January 2026 — provided by Q-Success

Technologies > DNS Servers > Squarespace

These diagrams show the usage statistics of Squarespace as DNS server provider. See technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily.

Squarespace is used as DNS server provider by 0.9% of all the websites.

Historical trend: this diagram shows the historical trend in the percentage of websites using Squarespace.

Market position: this diagram shows the market position of Squarespace in terms of popularity and traffic compared to the most popular DNS server providers.

Popular sites using Squarespace: Squarespace.com, Bio.site (used on inner pages), Wickedwhimsmod.com.
Random selection of sites using Squarespace: Electrolinkchargers.com, Petersburgarts.com, Madalingiurgescu.com, Aquafyiv.com, Sharedcitysharedspace.net.
Sites using Squarespace only recently: Jamestown.org, Wewordle.org, Appraisalscout.com, Gipartnersofil.com, Studentluxe.co.uk.

Our visitors often compare the usage statistics of Squarespace with SiteGround, Tilda, and DigitalOcean.

Technology Brief — Squarespace (category: DNS Server Providers): Squarespace offers a hosted web publishing platform. Website: squarespace.com
2026-01-13T09:29:20
https://w3techs.com/technologies/cross/dns_server/image_format
Usage Survey of DNS Server Providers broken down by Image File Formats — provided by Q-Success

Technologies > DNS Servers > by Image File Formats

This diagram shows the percentages of websites using various DNS server providers broken down by image file formats. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See technologies overview for explanations on the methodologies used in the surveys.

How to read the diagram: Cloudflare is used by 15.4% of all the websites. Cloudflare is used by 15.2% of all the websites that use PNG as image file format.

                         Overall    PNG     JPEG    SVG     WebP    GIF     AVIF
Cloudflare                15.4%    15.2%   14.3%   15.8%   22.5%   15.3%   33.7%
GoDaddy Group             10.1%    10.1%    9.9%   11.2%   10.0%    8.1%   11.9%
Newfold Digital Group      4.0%     4.3%    4.4%    3.6%    3.6%    4.3%    2.4%

W3Techs.com, 13 January 2026

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
Technology Brief: DNS Server Providers. A DNS (domain name system) server manages internet domain names and their associated records, such as IP addresses. We group brands that are owned by the same entity to show the real relevant market shares.

Latest related posting: Web Technologies of the Year 2025 (5 January 2026). We compiled the list of web technologies that saw the largest increase in usage in 2025.
2026-01-13T09:29:20
https://w3techs.com/technologies/details/dn-groupgmointernet
Usage Statistics and Market Share of GMO Internet Group as DNS Server Provider, January 2026

Technologies > DNS Servers > GMO Internet Group

Usage statistics of GMO Internet Group as DNS server provider. Request an extensive GMO Internet Group market report.
These diagrams show the usage statistics of GMO Internet Group as DNS server provider. See technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily.

GMO Internet Group is used as DNS server provider by 1.8% of all the websites.

Subcategories of GMO Internet Group: this table shows the percentages of websites using various subcategories of GMO Internet Group. How to read it: Onamae is used by 26.4% of all the websites that use GMO Internet Group.

  Onamae          26.4%
  GMO Pepabo      19.4%
  GMO Internet    18.9%
  Lolipop         16.4%
  ConoHa           8.4%
  Heteml           5.1%
  GMO GlobalSign   3.9%
  Color Me Shop    1.6%
  GMO DigiRock     less than 0.1%

W3Techs.com, 13 January 2026. Note: a website may use more than one subcategory of GMO Internet Group.

Historical trend: this diagram shows the historical trend in the percentage of websites using GMO Internet Group. Our dedicated trend survey shows more DNS server usage trends. You can find growth rates of GMO Internet Group compared to all other DNS server providers in our GMO Internet Group market report.

Market position: this diagram shows the market position of GMO Internet Group in terms of popularity and traffic compared to the most popular DNS server providers. Our dedicated market survey shows more DNS server market data.

Popular sites using GMO Internet Group: Fantia.jp, Esuteru.com, Unitedcinemas.jp, Shalove.net, Trepy.jp, Life-n.jp, Zerocha.jp, Movacal.net, Point-news.jp, Gentosha-go.com.
Random selection of sites using GMO Internet Group: Sinkyu-kazamidori.com, Fresh-olive.com, Hipragga.com, Ezcompany.jp, Hirodental.com.
Sites that started using GMO Internet Group only recently: Bizsoft.jp, Admcom.co.jp, Jobcan.biz, Sayonari.com, Fukumitsu-sc.com.

You can find more examples of sites using GMO Internet Group in our GMO Internet Group market report, or you can request a custom web technology market report.
Technology comparisons: our visitors often compare the usage statistics of GMO Internet Group with SiteGround, Namecheap and Name.com.

Technology Brief: GMO Internet Group. Category: DNS Server Providers. GMO Internet Group is a Japanese holding of IT and internet service providers. Website: group.gmo
2026-01-13T09:29:20
https://w3techs.com/terms_of_use
W3Techs - Terms Of Use

Bottom Line: Q-Success provides this service free of charge "as is", with no warranties whatsoever, and cannot be held responsible for any damages arising from the use of this web site. Users that sign up to this service are responsible for activities performed using their user account, including, but not limited to, content made available on this web site.

Agreement: This web site is provided to you free of charge "as is" by Q-Success DI Gelbmann GmbH. By using the information and services available through this web site, you are agreeing to the terms and conditions contained herein.

Liability Disclaimer: The information and services available to you on this web site may contain errors and are subject to periods of interruption. While Q-Success tries to maintain the services it offers on the web site, it cannot be held responsible for any errors, defects, lost profits or other consequential damages arising from the use of this web site. Q-Success provides the information and services available on this web site "as is", with no warranties whatsoever. Q-Success reserves the right to change, modify or terminate this service or parts of it at any time and without notice. All express warranties and all implied warranties, including warranties of merchantability, fitness for a particular purpose, and non-infringement of proprietary rights, are hereby disclaimed to the fullest extent permitted by law. In no event shall Q-Success be liable for any direct, indirect, incidental or consequential damages, or any damages whatsoever, arising from the use or performance of this web site or from any information or services provided through this web site, even if Q-Success has been advised of the possibility of such damages.
If you are dissatisfied with this web site, or any portion thereof, your exclusive remedy shall be to stop using the web site.

User accounts: During the sign-up process you are assigned a user name and password. You and you alone are solely responsible for maintaining the confidentiality of your password and of any information associated with your account that you desire to remain confidential. You also agree that you are responsible for any and all activities that take place under your password and account. You further agree to notify Q-Success in the event your password or account has been used without proper authorization, or there are other breaches of security of which you become aware. Q-Success will not be responsible or liable for any loss or damage incurred or arising from your failure to comply with this section. Q-Success prohibits the transfer of control of any W3Techs.com account by the registered account holder to any other individual or party.

Misuse: Using the site for anything other than its intended purpose is considered misuse. Q-Success reserves the right to suspend or terminate user accounts, in whole or in part, or prohibit further use of the service, at any time and without notice.

User-provided Content: With regard to content you make available in any publicly accessible areas of this web site, you hereby grant Q-Success the worldwide, royalty-free, perpetual, irrevocable, and non-exclusive license to use, reproduce, modify, translate, display, create derivative works from, and publish such content on or in connection with W3Techs.com or other sites. It is your responsibility to ensure that content you make available is meant for public display, which excludes adult or mature content and links to sites with adult or mature content.
It is also your responsibility to ensure that content you make available does not violate copyrights held by third parties, and that making that content available on the web does not violate any local, national or international law. You acknowledge that Q-Success in its sole discretion may choose not to display any content you make available, or to remove content you make available from its servers without notice.

Links to other Sites: This site contains links to other sites. The links and the linked sites are not under the control of Q-Success; Q-Success is not responsible for the contents of any linked site, and the inclusion of any link does not imply endorsement of the site by Q-Success.

Quotations: If you quote information from this web site, you must include a reference. The reference should contain a date, because information on this site changes frequently. If you publish the quote on the Internet, a link to the original page on this site must be provided.

Miscellaneous: These terms and conditions shall be governed exclusively by and construed in accordance with the laws of Austria, and you agree to submit to the personal jurisdiction of the appropriate courts of Austria. In the event that any portion of these terms and conditions is deemed by a court to be invalid, the remaining provisions shall remain in full force and effect.

Copyright Notice: All contents of this web site are Copyright © 2009-2026 Q-Success DI Gelbmann GmbH. All rights reserved.

Last Modification of this Page: 18 November 2009
2026-01-13T09:29:20
https://w3techs.com/technologies/cross/dns_server/structured_data
Usage Survey of DNS Server Providers broken down by Structured Data Formats

Technologies > DNS Servers > by Structured Data

Usage of DNS server providers broken down by structured data formats. Detailed statistics are available in our extensive DNS server providers market report.

This table shows the percentages of websites using various DNS server providers broken down by structured data formats. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See technologies overview for explanations on the methodologies used in the surveys.

How to read the table: Cloudflare is used by 15.4% of all the websites; Cloudflare is used by 15.7% of all the websites that use Open Graph as structured data format.

                        Overall  Open Graph  Twitter/X Cards  JSON-LD  Generic RDFa  Microdata  Dublin Core
  Cloudflare             15.4%     15.7%         16.0%         15.6%      17.6%        14.5%      20.9%
  GoDaddy Group          10.1%     11.1%         12.0%         10.9%       9.0%         8.4%       8.1%
  Newfold Digital Group   4.0%      3.8%          3.8%          3.8%       4.1%         3.9%       2.9%

W3Techs.com, 13 January 2026. Percentages of websites using various DNS server providers broken down by structured data formats.

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
2026-01-13T09:29:20
https://www.linkedin.com/uas/login?session_redirect=%2Fproducts%2Fstaffbase&trk=products_details_guest_primary_call_to_action
LinkedIn Login, Sign in | LinkedIn
2026-01-13T09:29:20
https://w3techs.com/technologies/cross/dns_server/markup_language
Usage Survey of DNS Server Providers broken down by Markup Languages

Technologies > DNS Servers > by Markup Languages

Usage of DNS server providers broken down by markup languages. Detailed statistics are available in our extensive DNS server providers market report.

This table shows the percentages of websites using various DNS server providers broken down by markup languages. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See technologies overview for explanations on the methodologies used in the surveys.

How to read the table: Cloudflare is used by 15.4% of all the websites; Cloudflare is used by 15.6% of all the websites that use HTML as markup language.

                        Overall   HTML   XHTML
  Cloudflare             15.4%   15.6%    9.4%
  GoDaddy Group          10.1%   10.2%    6.5%
  Newfold Digital Group   4.0%    4.0%    4.9%

W3Techs.com, 13 January 2026. Percentages of websites using various DNS server providers broken down by markup languages.

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
2026-01-13T09:29:20
https://www.linkedin.com/uas/login?session_redirect=%2Fproducts%2Fleadsquared-marketing-automation%3FviewConnections%3Dtrue&trk=products_details_guest_face-pile-cta
LinkedIn Login, Sign in | LinkedIn
2026-01-13T09:29:20
https://w3techs.com/technologies/cross/dns_server/css_framework
Usage Survey of DNS Server Providers broken down by CSS Frameworks

Technologies > DNS Servers > by CSS Frameworks

Usage of DNS server providers broken down by CSS frameworks. Detailed statistics are available in our extensive DNS server providers market report.

This table shows the percentages of websites using various DNS server providers broken down by CSS frameworks. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See technologies overview for explanations on the methodologies used in the surveys.

How to read the table: Cloudflare is used by 15.4% of all the websites; Cloudflare is used by 16.7% of all the websites that use Bootstrap as CSS framework.

                        Overall  Bootstrap  Animate  Foundation  Tailwind
  Cloudflare             15.4%     16.7%     14.3%      17.0%      29.5%
  GoDaddy Group          10.1%      9.1%      9.9%      13.3%       5.7%
  Newfold Digital Group   4.0%      4.9%      5.1%       6.1%       2.4%

W3Techs.com, 13 January 2026. Percentages of websites using various DNS server providers broken down by CSS frameworks.

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
2026-01-13T09:29:20
https://www.linkedin.com/products/dataport-a%C3%B6r-dprojecttracking/?trk=products_details_guest_other_products_by_org_section_product_link_result-card_full-click#main-content
dProjectTracking | LinkedIn

dProjectTracking, project management software by Dataport AöR.

About: dProjectTracking is your web-based ticketing solution for efficient teamwork. It is based on Atlassian's JIRA software, but is operated in the BSI-certified Twin Data Center. With dProjectTracking you manage projects in an agile way using Scrum and Kanban: you distribute and process tasks quickly while always keeping an overview. At a glance: Scrum and Kanban boards; reports and dashboards; attractive project, user and storage packages; flexible contract terms.

This product is intended for: Referent, Head of Section, Clerical Officer, Civil Servant, Scrum Master, Project Manager, Senior Software Engineer, Process Manager, Information Technology Application Manager.

Media: dProjectTracking, a project ticketing tool for all purposes.

Similar products: Jira, Trello, GitHub Issues, Zoho Projects, Asana (project management software); Notion (knowledge management software).

Other Dataport AöR products: data[port]ai (data science & machine learning platforms), Dataport Consulting (strategic planning software), dDataBox (managed file transfer software), dMessenger (enterprise messaging software), dWebService (web hosting), dWorkflow (workflow management software).
2026-01-13T09:29:20
https://w3techs.com/technologies/cross/dns_server/content_management
Usage Survey of DNS Server Providers broken down by Content Management Systems

Technologies > DNS Servers > by Content Management

Usage of DNS server providers broken down by content management systems. Detailed statistics are available in our extensive DNS server providers market report.

This table shows the percentages of websites using various DNS server providers broken down by content management systems. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See technologies overview for explanations on the methodologies used in the surveys.

How to read the table: Cloudflare is used by 15.4% of all the websites; Cloudflare is used by 15.7% of all the websites that use WordPress as content management system.

                        Overall  WordPress  Shopify  Wix   Squarespace  Joomla  Webflow  Tilda  Drupal  Duda   Adobe Systems
  Cloudflare             15.4%     15.7%     7.0%    1.6%      3.5%      6.5%    50.9%    3.1%  15.2%    7.2%     17.9%
  GoDaddy Group          10.1%      8.2%    30.3%    4.3%     20.4%      4.1%    25.1%    1.9%   6.4%   19.7%      6.8%
  Newfold Digital Group   4.0%      4.9%     4.5%    0.8%      5.5%      2.5%     3.7%    0.2%   3.4%    4.2%      3.8%

W3Techs.com, 13 January 2026. Percentages of websites using various DNS server providers broken down by content management systems.

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
2026-01-13T09:29:20
https://w3techs.com/technologies/overview/structured_data
Usage Statistics of Structured Data Formats for Websites, January 2026

Technologies > Structured Data

Usage statistics of structured data formats for websites. This table shows the percentages of websites using various structured data formats. See technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily.

How to read the table: 21.7% of the websites use none of the structured data formats that we monitor. Open Graph is used by 69.8% of all the websites.
  None             21.7%
  Open Graph       69.8%
  Twitter/X Cards  55.3%
  JSON-LD          52.6%
  Generic RDFa     39.4%
  Microdata        23.1%
  Dublin Core       0.8%
  Microformats      0.5%

W3Techs.com, 13 January 2026. Percentages of websites using various structured data formats. Note: a website may use more than one structured data format.

Technology Brief: Structured Data Formats. Structured data formats allow search engines and other bots to extract specific data from web pages, e.g. information about the organization.
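To make concrete what a structured data format looks like on a page, here is a minimal, hypothetical JSON-LD example (the organization name and URL are invented for illustration and are not taken from the survey): a plain JSON object with a schema.org context, embedded in a script tag that crawlers read.

```python
import json

# Hypothetical minimal JSON-LD payload of the kind crawlers extract
# from a page's <script type="application/ld+json"> element.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Corp",
    "url": "https://example.com",
}

snippet = (
    '<script type="application/ld+json">'
    + json.dumps(org)
    + "</script>"
)
print(snippet)
```

Formats such as Microdata and RDFa express the same kind of information as attributes on existing HTML elements instead of a separate JSON block, which is one reason a single site often uses several of the formats listed above.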
2026-01-13T09:29:20
https://www.linkedin.com/uas/login?session_redirect=%2Fservices%2Fproducts%2Fstaffbase%2F&fromSignIn=true&trk=products_details_guest_nav-header-signin
LinkedIn Login, Sign in | LinkedIn
2026-01-13T09:29:20
https://www.linkedin.com/products/categories/chatbot-software?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_subtitle-click
Best Chatbot Software | Products | LinkedIn

Find top products in the Chatbot Software category. Software used to simulate conversation based on natural language input:
- Engage users with automated chat for marketing, customer service, or information
- Use text or text-to-speech input processing to provide scripted conversation
- Respond contextually with a standalone language recognition solution
- Execute pre-built responses on apps, websites, and social media

273 results.

Blip (Chatbot Software by Blip): Use Blip's Conversational Intelligence to increase revenue and loyalty! With the Blip platform integrated with the power of Artificial Intelligence, you deliver an exceptional experience across the customer journey, from marketing and sales to support.

RD Station Conversas (Chatbot Software by RD Station): RD Station Conversas is a digital sales and customer service platform that integrates multiple communication channels and WhatsApp numbers. You can centralize all of your channels (WhatsApp, Telegram, Instagram Messenger and Facebook Messenger) in RD Station Conversas, allowing your entire support team to interact with leads and customers in one place. RD Station Conversas can be used by any type of business with sales and support teams that use digital channels. The greater the brand's digital investment and relevance, the greater the results with RD Station Conversas.
Customer Engagement, Chatbot Software by Truora Inc.: Create automated conversations on WhatsApp and provide customer service, marketing and sales chatbots.

Chatbot, Chatbot Software by Unifonic: Unifonic Chatbot is a visual tool for building fully functional chatbots using a drag-and-drop creator. Users can build conversational flows with ease and flexibility in minutes without any coding required. It currently supports WhatsApp, Webchat, Twitter Direct Messenger, and Facebook Messenger.

Omnichat, Chatbot Software by Omnichat:
(1) Omnichannel CRM Integration: Manage all communication channels in one place, including the official WhatsApp Business API, Facebook, Instagram, WeChat, LINE and website live chat, with 24/7 chatbot automation to respond to messages without delay.
(2) Marketing Automation: customer browsing behaviour tracking, abandoned-cart remarketing through WhatsApp, Facebook and LINE, plus gamification and coupon marketing.
(3) OMO (Online-Merge-Offline) Sales: Omnichat's system automatically binds the customer to a specific salesperson for one-on-one selling over WhatsApp or LINE. Once the customer completes the purchase, the system attributes the revenue to that salesperson and facilitates cross-channel revenue tracking.
(4) Social CDP: map different social-channel identities to a unique profile, and automatically send messages at specific points in a comprehensive customer journey, e.g. when customers first visit, join membership, make a second purchase, etc.

ELX Chatbot, Chatbot Software by EchoLogyx Ltd: Ultimate AI-powered assistant for eCommerce businesses. Automate customer support, boost sales, and improve customer engagement.
ChatGuru, Chatbot Software by ChatGuru: Our ChatGuru chatbot automation solutions for WhatsApp focus on increasing sales, optimizing customer service and organizing your team, and will fit your company perfectly, whatever your segment. Some of our features:
• Several WhatsApp numbers in a single account #OPTIMIZATION
• Use one WhatsApp number with multiple users #PRACTICALITY
• Quick messages for greater agility #AGILITY
• Agent permission management #CONTROL
• Integration with CRMs #INTEGRATION
• Automatic and intelligent funnel #FOLLOW-UP
• Bank of automatic replies with AI #INTELLIGENCE
• Round-robin conversation assignment #ASSIGNMENT
• Customizable reports #PRACTICALITY
• Personalized campaigns #PERSONALIZATION
• Customizable fields #ADAPTATION
• Internal notes on conversations and contacts #TAGS
• Alerts for new leads generated on WhatsApp #OPTIMIZATION
• Organization by status #ORGANIZATION
Request a demo and discover CHATGURU!

Manychat for Facebook Messenger, Chatbot Software by Manychat: Manychat for Messenger automates conversations to fuel more sales, generate leads, automate FAQs and run marketing campaigns.

Leadster, Chatbot Software by Leadster: Leadster is a smart chatbot designed to optimize the conversion of website visitors into qualified leads. Unlike traditional chat tools, our technology acts as an automated "sales consultant": engaging visitors, qualifying opportunities, and routing sales-ready leads to your team. The platform operates as a 24/7 qualification engine, capturing buying intent at the right moment, reducing CAC, and accelerating the sales cycle. Instead of generic forms, we offer an interactive approach that enhances both conversion rates and user experience. With customizable flows and seamless integration with CRMs and marketing tools, Leadster turns your website traffic into a real sales pipeline.
All this with easy setup, a free trial, and a strong focus on performance. Perfect for companies looking to scale acquisition without relying solely on paid media or SDRs.

BeyondChats, Chatbot Software by BeyondChats:
- Automate user inquiries on your website with our advanced AI chatbot
- Identify users who are most likely to buy your product / service
- Analytics to help you understand what your users are looking for, why they leave without registering or buying, and what is missing on your website
2026-01-13T09:29:20
https://www.chiark.greenend.org.uk/~sgtatham/putty/feedback.html
PuTTY Feedback and Bug Reporting

Appendix B: Feedback and bug reporting

B.1 General guidelines
B.1.1 Sending large attachments
B.2 Reporting bugs
B.3 Reporting security vulnerabilities
B.4 Requesting extra features
B.5 Requesting features that have already been requested
B.6 Workarounds for SSH server bugs
B.7 Support requests
B.8 Web server administration
B.9 Asking permission for things
B.10 Mirroring the PuTTY web site
B.11 Praise and compliments
B.12 E-mail address

This is a guide to providing feedback to the PuTTY development team. It is provided as both a web page on the PuTTY site, and an appendix in the PuTTY manual. Section B.1 gives some general guidelines for sending any kind of e-mail to the development team. Following sections give more specific guidelines for particular types of e-mail, such as bug reports and feature requests.

B.1 General guidelines

The PuTTY development team gets a lot of mail. If you can possibly solve your own problem by reading the manual, reading the FAQ, reading the web site, asking a fellow user, or some other means, then it would make our lives much easier. We get so much e-mail that we literally do not have time to answer it all. We regret this, but there's nothing we can do about it. So if you can possibly avoid sending mail to the PuTTY team, we recommend you do so. In particular, support requests (section B.7) are probably better sent to some public forum, or passed to a local expert if possible.

The PuTTY contact email address is a private mailing list containing four or five core developers. Don't be put off by it being a mailing list: if you need to send confidential data as part of a bug report, you can trust the people on the list to respect that confidence.
Also, the archives aren't publicly available, so you shouldn't be letting yourself in for any spam by sending us mail.

Please use a meaningful subject line on your message. We get a lot of mail, and it's hard to find the message we're looking for if they all have subject lines like ‘PuTTY bug’.

B.1.1 Sending large attachments

Since the PuTTY contact address is a mailing list, e-mails larger than 40Kb will be held for inspection by the list administrator, and will not be allowed through unless they really appear to be worth their large size. If you are considering sending any kind of large data file to the PuTTY team, it's almost always a bad idea, or at the very least it would be better to ask us first whether we actually need the file. Alternatively, you could put the file on a web site and just send us the URL; that way, we don't have to download it unless we decide we actually need it, and only one of us needs to download it instead of it being automatically copied to all the developers. (If the file contains confidential information, then you could encrypt it with our Secure Contact Key; see section F.1 for details. Please only use this for information that needs to be confidential.)

Some people like to send mail in MS Word format. Please don't send us bug reports, or any other mail, as a Word document. Word documents are roughly fifty times larger than writing the same report in plain text. In addition, most of the PuTTY team read their e-mail on Unix machines, so copying the file to a Windows box to run Word is very inconvenient. Not only that, but several of us don't even have a copy of Word!

Some people like to send us screen shots when demonstrating a problem. Please don't do this without checking with us first - we almost never actually need the information in the screen shot. Sending a screen shot of an error box is almost certainly unnecessary when you could just tell us in plain text what the error was.
(On some versions of Windows, pressing Ctrl-C when the error box is displayed will copy the text of the message to the clipboard.) Sending a full-screen shot is occasionally useful, but it's probably still wise to check whether we need it before sending it. If you must mail a screen shot, don't send it as a .BMP file. BMPs have no compression and they are much larger than other image formats such as PNG, TIFF and GIF. Convert the file to a properly compressed image format before sending it.

Please don't mail us executables, at all. Our mail server blocks all incoming e-mail containing executables, as a defence against the vast numbers of e-mail viruses we receive every day. If you mail us an executable, it will just bounce.

If you have made a tiny modification to the PuTTY code, please send us a patch to the source code if possible, rather than sending us a huge .ZIP file containing the complete sources plus your modification. If you've only changed 10 lines, we'd prefer to receive a mail that's 30 lines long than one containing multiple megabytes of data we already have.

B.2 Reporting bugs

If you think you have found a bug in PuTTY, your first steps should be:

- Check the Wishlist page on the PuTTY website, and see if we already know about the problem. If we do, it is almost certainly not necessary to mail us about it, unless you think you have extra information that might be helpful to us in fixing it. (Of course, if we actually need specific extra information about a particular bug, the Wishlist page will say so.)
- Check the Change Log on the PuTTY website, and see if we have already fixed the bug in the development snapshots.
- Check the FAQ on the PuTTY website (also provided as appendix A in the manual), and see if it answers your question. The FAQ lists the most common things which people think are bugs, but which aren't bugs.
- Download the latest development snapshot and see if the problem still happens with that. This really is worth doing.
As a general rule we aren't very interested in bugs that appear in the release version but not in the development version, because that usually means they are bugs we have already fixed. On the other hand, if you can find a bug in the development version that doesn't appear in the release, that's likely to be a new bug we've introduced since the release and we're definitely interested in it.

If none of those options solved your problem, and you still need to report a bug to us, it is useful if you include some general information:

- Tell us what version of PuTTY you are running. To find this out, use the ‘About PuTTY’ option from the System menu. Please do not just tell us ‘I'm running the latest version’; e-mail can be delayed and it may not be obvious which version was the latest at the time you sent the message.
- PuTTY is a multi-platform application; tell us what version of what OS you are running PuTTY on. (If you're running on Unix, or Windows for Arm, tell us, or we'll assume you're running on Windows for Intel as this is overwhelmingly the case.)
- Tell us what protocol you are connecting with: SSH, Telnet, Rlogin, SUPDUP, or Raw mode, or a serial connection.
- Tell us what kind of server you are connecting to; what OS, and if possible what SSH server (if you're using SSH). You can get some of this information from the PuTTY Event Log (see section 3.1.3.1 in the manual).
- Send us the contents of the PuTTY Event Log, unless you have a specific reason not to (for example, if it contains confidential information that you think we should be able to solve your problem without needing to know).
- Try to give us as much information as you can to help us see the problem for ourselves. If possible, give us a step-by-step sequence of precise instructions for reproducing the fault.
- Don't just tell us that PuTTY ‘does the wrong thing’; tell us exactly and precisely what it did, and also tell us exactly and precisely what you think it should have done instead.
Some people tell us PuTTY does the wrong thing, and it turns out that it was doing the right thing and their expectations were wrong. Help to avoid this problem by telling us exactly what you think it should have done, and exactly what it did do.

If you think you can, you're welcome to try to fix the problem yourself. A patch to the code which fixes a bug is an excellent addition to a bug report. However, a patch is never a substitute for a good bug report; if your patch is wrong or inappropriate, and you haven't supplied us with full information about the actual bug, then we won't be able to find a better solution.

https://www.chiark.greenend.org.uk/~sgtatham/bugs.html is an article on how to report bugs effectively in general. If your bug report is particularly unclear, we may ask you to go away, read this article, and then report the bug again.

It is reasonable to report bugs in PuTTY's documentation, if you think the documentation is unclear or unhelpful. But we do need to be given exact details of what you think the documentation has failed to tell you, or how you think it could be made clearer. If your problem is simply that you don't understand the documentation, we suggest asking around and seeing if someone will explain what you need to know. Then, if you think the documentation could usefully have told you that, send us a bug report and explain how you think we should change it.

B.3 Reporting security vulnerabilities

If you've found a security vulnerability in PuTTY, you might well want to notify us using an encrypted communications channel, to avoid disclosing information about the vulnerability before a fixed release is available. For this purpose, we provide a GPG key suitable for encryption: the Secure Contact Key. See section F.1 for details of this. (Of course, vulnerabilities are also bugs, so please do include as much information as possible about them, the same way you would with any other bug report.)
B.4 Requesting extra features

If you want to request a new feature in PuTTY, the very first things you should do are:

- Check the Wishlist page on the PuTTY website, and see if your feature is already on the list. If it is, it probably won't achieve very much to repeat the request. (But see section B.5 if you want to persuade us to give your particular feature higher priority.)
- Check the Wishlist and Change Log on the PuTTY website, and see if we have already added your feature in the development snapshots. If it isn't clear, download the latest development snapshot and see if the feature is present. If it is, then it will also be in the next release and there is no need to mail us at all.

If you can't find your feature in either the development snapshots or the Wishlist, then you probably do need to submit a feature request. Since the PuTTY authors are very busy, it helps if you try to do some of the work for us:

- Do as much of the design as you can. Think about ‘corner cases’; think about how your feature interacts with other existing features. Think about the user interface; if you can't come up with a simple and intuitive interface to your feature, you shouldn't be surprised if we can't either. Always imagine whether it's possible for there to be more than one, or less than one, of something you'd assumed there would be one of. (For example, if you were to want PuTTY to put an icon in the System tray rather than the Taskbar, you should think about what happens if there's more than one PuTTY active; how would the user tell which was which?)
- If you can program, it may be worth offering to write the feature yourself and send us a patch. However, it is likely to be helpful if you confer with us first; there may be design issues you haven't thought of, or we may be about to make big changes to the code which your patch would clash with, or something. If you check with the maintainers first, there is a better chance of your code actually being usable.
Also, read the design principles listed in appendix E: if you do not conform to them, we will probably not be able to accept your patch.

B.5 Requesting features that have already been requested

If a feature is already listed on the Wishlist, then it usually means we would like to add it to PuTTY at some point. However, this may not be in the near future. If there's a feature on the Wishlist which you would like to see in the near future, there are several things you can do to try to increase its priority level:

- Mail us and vote for it. (Be sure to mention that you've seen it on the Wishlist, or we might think you haven't even read the Wishlist.) This probably won't have very much effect; if a huge number of people vote for something then it may make a difference, but one or two extra votes for a particular feature are unlikely to change our priority list immediately. Offering a new and compelling justification might help. Also, don't expect a reply.
- Offer us money if we do the work sooner rather than later. This sometimes works, but not always. The PuTTY team all have full-time jobs and we're doing all of this work in our free time; we may sometimes be willing to give up some more of our free time in exchange for some money, but if you try to bribe us for a big feature it's entirely possible that we simply won't have the time to spare - whether you pay us or not. (Also, we don't accept bribes to add bad features to the Wishlist, because our desire to provide high-quality software to the users comes first.)
- Offer to help us write the code. This is probably the only way to get a feature implemented quickly, if it's a big one that we don't have time to do ourselves.

B.6 Workarounds for SSH server bugs

It's normal for SSH implementations to automatically enable workarounds for each other's bugs, using the software version strings that are exchanged at the start of the connection.
Typically an SSH client will have a list of server version strings that it believes to have particular bugs, and auto-enable the appropriate set of workarounds when it sees one of those strings. (And servers will have a similar list of workarounds for client software they believe to be buggy.)

If you've found a bug in an SSH server, and you'd like us to add an auto-detected workaround for it, our policy is that the server implementor should fix it first. If the server implementor has fixed it in the latest version, and can give us a complete description of the version strings that go with the bug, then we're happy to use those version strings as a trigger to automatically enable our workaround (assuming one is possible). We won't accept requests to auto-enable workarounds for an open-ended set of version strings, such as ‘any version of FooServer, including future ones not yet released’.

The aim of this policy is to encourage implementors to gradually converge on the actual standardised SSH protocol. If we enable people to continue violating the spec, by installing open-ended workarounds in PuTTY for bugs they're never going to fix, then we're contributing to an ecosystem in which everyone carries on having bugs and everyone else carries on having to work around them.

An exception: if an SSH server is no longer maintained at all (e.g. the company that produced it has gone out of business), and every version of it that was ever released has a bug, then that's one situation in which we may be prepared to add a workaround rule that matches all versions of that software. (The aim is to stop implementors from continuing to release software with the bug – and if they're not releasing it at all any more, then that's already done!)

We do recognise that sometimes it will be difficult to get the server maintainer to fix a bug, or even to answer support requests at all. Or it might take them a very long time to get round to doing anything about it.
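The version-string matching mechanism described above can be sketched roughly as follows. This is a hypothetical illustration in Python, not PuTTY's actual code: the server names, workaround flags and glob patterns are invented for the example.

```python
import fnmatch

# Hypothetical table of known-buggy server version banners, mapping a
# (closed-ended) glob pattern to the set of workaround flags a client
# would auto-enable when connecting to such a server.
KNOWN_BUGGY_SERVERS = {
    "SSH-2.0-FooServer_1.*": {"chokes-on-rekey"},
    "SSH-2.0-BarSSH_2.0": {"wrong-session-id", "chokes-on-rekey"},
}

def workarounds_for(server_version_string):
    """Return the set of workarounds to auto-enable for a server banner.

    The banner is the version string the server sends at the start of
    the connection; servers not in the table get no workarounds.
    """
    enabled = set()
    for pattern, flags in KNOWN_BUGGY_SERVERS.items():
        if fnmatch.fnmatchcase(server_version_string, pattern):
            enabled |= flags
    return enabled
```

Note how the patterns are bounded to versions that actually shipped with the bug; the policy above rules out a pattern like `SSH-2.0-FooServer_*`, which would also match future, fixed releases.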
We're not completely unwilling to compromise: we're prepared to add manually enabled workarounds to PuTTY even for bugs that an implementation hasn't fixed yet. We just won't automatically enable the workaround unless the server maintainer has also done their part.

B.7 Support requests

If you're trying to make PuTTY do something for you and it isn't working, but you're not sure whether it's a bug or not, then please consider looking for help somewhere else. This is one of the most common types of mail the PuTTY team receives, and we simply don't have time to answer all the questions. Questions of this type include:

- If you want to do something with PuTTY but have no idea where to start, and reading the manual hasn't helped, try posting to a public forum and see if someone can explain it to you.
- If you have tried to do something with PuTTY but it hasn't worked, and you aren't sure whether it's a bug in PuTTY or a bug in your SSH server or simply that you're not doing it right, then try posting to some public forum and see if someone can solve your problem. Or try doing the same thing with a different SSH client and see if it works with that. Please do not report it as a PuTTY bug unless you are really sure it is a bug in PuTTY.
- If someone else installed PuTTY for you, or you're using PuTTY on someone else's computer, try asking them for help first. They're more likely to understand how they installed it and what they expected you to use it for than we are.
- If you have successfully made a connection to your server and now need to know what to type at the server's command prompt, or other details of how to use the server-end software, talk to your server's system administrator. This is not the PuTTY team's problem. PuTTY is only a communications tool, like a telephone; if you can't speak the same language as the person at the other end of the phone, it isn't the telephone company's job to teach it to you.
If you absolutely cannot get a support question answered any other way, you can try mailing it to us, but we can't guarantee to have time to answer it.

B.8 Web server administration

If the PuTTY web site is down (Connection Timed Out), please don't bother mailing us to tell us about it. Most of us read our e-mail on the same machines that host the web site, so if those machines are down then we will notice before we read our e-mail. So there's no point telling us our servers are down. Of course, if the web site has some other error (Connection Refused, 404 Not Found, 403 Forbidden, or something else) then we might not have noticed and it might still be worth telling us about it.

If you want to report a problem with our web site, check that you're looking at our real web site and not a mirror. The real web site is at https://www.chiark.greenend.org.uk/~sgtatham/putty/; if that's not where you're reading this, then don't report the problem to us until you've checked that it's really a problem with the main site. If it's only a problem with the mirror, you should try to contact the administrator of that mirror site first, and only contact us if that doesn't solve the problem (in case we need to remove the mirror from our list).

B.9 Asking permission for things

PuTTY is distributed under the MIT Licence (see appendix D for details). This means you can do almost anything you like with our software, our source code, and our documentation. The only things you aren't allowed to do are to remove our copyright notices or the licence text itself, or to hold us legally responsible if something goes wrong.

So if you want permission to include PuTTY on a magazine cover disk, or as part of a collection of useful software on a CD or a web site, then permission is already granted. You don't have to mail us and ask. Just go ahead and do it. We don't mind.
(If you want to distribute PuTTY alongside your own application for use with that application, or if you want to distribute PuTTY within your own organisation, then we recommend, but do not insist, that you offer your own first-line technical support, to answer questions about the interaction of PuTTY with your environment. If your users mail us directly, we won't be able to tell them anything useful about your specific setup.)

If you want to use parts of the PuTTY source code in another program, then it might be worth mailing us to talk about technical details, but if all you want is to ask permission then you don't need to bother. You already have permission.

If you just want to link to our web site, just go ahead. (It's not clear that we could stop you doing this, even if we wanted to!)

B.10 Mirroring the PuTTY web site

If you want to set up a mirror of the PuTTY website, go ahead and set one up. Please don't bother asking us for permission before setting up a mirror. You already have permission.

If the mirror is in a country where we don't already have plenty of mirrors, we may be willing to add it to the list on our mirrors page. Read the guidelines on that page, make sure your mirror works, and email us the information listed at the bottom of the page. Note that we do not promise to list your mirror: we get a lot of mirror notifications and yours may not happen to find its way to the top of the list. Also note that we link to all our mirror sites using the rel="nofollow" attribute. Running a PuTTY mirror is not intended to be a cheap way to gain search rankings.

If you have technical questions about the process of mirroring, then you might want to mail us before setting up the mirror (see also the guidelines on the Mirrors page); but if you just want to ask for permission, you don't need to. You already have permission.

B.11 Praise and compliments

One of the most rewarding things about maintaining free software is getting e-mails that just say ‘thanks’.
We are always happy to receive e-mails of this type. Regrettably we don't have time to answer them all in person. If you mail us a compliment and don't receive a reply, please don't think we've ignored you. We did receive it and we were happy about it; we just didn't have time to tell you so personally.

To everyone who's ever sent us praise and compliments, in the past and the future: you're welcome!

B.12 E-mail address

The actual address to mail is <putty@projects.tartarus.org>. If you want to comment on this web site, see the instructions above.

(last modified on Sat Feb 8 11:06:02 2025)
2026-01-13T09:29:20
https://w3techs.com/technologies/cross/dns_server/web_server
Usage Survey of DNS Server Providers broken down by Web Servers

Usage of DNS server providers broken down by web servers. Detailed statistics in our extensive DNS server providers market report.

This diagram shows the percentages of websites using various DNS server providers broken down by web servers. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See technologies overview for explanations on the methodologies used in the surveys.

How to read the diagram: Cloudflare is used by 15.4% of all the websites. Cloudflare is used by 6.7% of all the websites that use Nginx as web server.

                        Overall  Nginx  Cloudflare Server  Apache  LiteSpeed  Node.js  Microsoft-IIS  Envoy
Cloudflare                15.4%   6.7%              57.1%    2.8%      17.6%    32.8%           7.6%   5.5%
GoDaddy Group             10.1%   8.4%              14.6%    9.4%       2.8%    14.1%          15.6%   5.6%
Newfold Digital Group      4.0%   5.3%               3.5%    7.0%       1.1%     2.3%           4.5%   1.1%

Percentages of websites using various DNS server providers broken down by web servers (W3Techs.com, 13 January 2026)

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
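The cells in the survey are column-conditional shares: each entry answers "of the websites running this web server, what fraction use this DNS provider?". A minimal sketch of querying that cross-tabulation (the figures are transcribed from the table above; the dictionary layout and the helper function are just one convenient representation, not anything W3Techs provides):

```python
# Cross-tabulation: provider -> {web server: % of sites running that
# server which use the provider}. "Overall" is the share across all
# surveyed websites.
usage = {
    "Cloudflare": {"Overall": 15.4, "Nginx": 6.7, "Cloudflare Server": 57.1,
                   "Apache": 2.8, "LiteSpeed": 17.6, "Node.js": 32.8,
                   "Microsoft-IIS": 7.6, "Envoy": 5.5},
    "GoDaddy Group": {"Overall": 10.1, "Nginx": 8.4, "Cloudflare Server": 14.6,
                      "Apache": 9.4, "LiteSpeed": 2.8, "Node.js": 14.1,
                      "Microsoft-IIS": 15.6, "Envoy": 5.6},
    "Newfold Digital Group": {"Overall": 4.0, "Nginx": 5.3, "Cloudflare Server": 3.5,
                              "Apache": 7.0, "LiteSpeed": 1.1, "Node.js": 2.3,
                              "Microsoft-IIS": 4.5, "Envoy": 1.1},
}

def strongest_server(provider):
    """Web server category (excluding Overall) where the provider's share peaks."""
    shares = {k: v for k, v in usage[provider].items() if k != "Overall"}
    return max(shares, key=shares.get)
```

For instance, `strongest_server("Cloudflare")` picks out the "Cloudflare Server" column, reflecting that sites proxied by Cloudflare very often also use Cloudflare DNS.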
Technology Brief: DNS Server Providers. A DNS (domain name system) server manages internet domain names and their associated records such as IP addresses. We group brands that are owned by the same entity, to show the real relevant market shares.

Latest related posting: Web Technologies of the Year 2025 (5 January 2026). We compiled the list of web technologies that saw the largest increase in usage in 2025.
2026-01-13T09:29:20
https://scholar.google.com/citations?view_op=search_authors&hl=ko&oe=ASCII&mauthors=label:ai_security
auto;-webkit-transform:none;-ms-transform:none;transform:none}.Dzz9Db.Irjbwb .f4ZpM{height:auto;width:100%}.Dzz9Db.EEeaqf .f4ZpM{max-height:144px;max-width:144px}.nPt1pc{background-image:-webkit-gradient(linear,left top,left bottom,from(rgba(233,233,233,0)),color-stop(62.22%,rgba(233,233,233,0)),color-stop(40.22%,rgb(233,233,233)),to(rgba(233,233,233,0)));background-image:-webkit-linear-gradient(top,rgba(233,233,233,0) 0,rgba(233,233,233,0) 62.22%,rgb(233,233,233) 40.22%,rgba(233,233,233,0) 100%);background-image:linear-gradient(to bottom,rgba(233,233,233,0) 0,rgba(233,233,233,0) 62.22%,rgb(233,233,233) 40.22%,rgba(233,233,233,0) 100%);height:100%;left:0;overflow:hidden;position:absolute;right:0;top:0;z-index:2}@media screen and (prefers-color-scheme:dark){.nPt1pc{display:none}}.nPt1pc::after,.nPt1pc::before{content:"";display:block;height:100%;min-width:110px;position:absolute;right:-10%;-webkit-transform:rotate(-104deg);-ms-transform:rotate(-104deg);transform:rotate(-104deg);width:25vh;z-index:2}@media (min-width:600px){.nPt1pc::after,.nPt1pc::before{width:150px}}.nPt1pc::before{background-image:-webkit-gradient(linear,left top,left bottom,from(rgba(243,243,243,0)),to(rgba(243,243,243,.9)));background-image:-webkit-linear-gradient(top,rgba(243,243,243,0) 0,rgba(243,243,243,.9) 100%);background-image:linear-gradient(to bottom,rgba(243,243,243,0) 0,rgba(243,243,243,.9) 100%);bottom:-10%}.nPt1pc::after{background-image:-webkit-gradient(linear,left top,left bottom,from(rgba(255,255,255,0)),to(rgba(255,255,255,.9)));background-image:-webkit-linear-gradient(top,rgba(255,255,255,0) 0,rgba(255,255,255,.9) 100%);background-image:linear-gradient(to bottom,rgba(255,255,255,0) 0,rgba(255,255,255,.9) 100%);bottom:-80%}.wsArZ[data-ss-mode="1"] .nPt1pc~.f4ZpM{width:auto}@media (min-width:600px) and (orientation:landscape),all and (min-width:1600px){.NQ5OL .nPt1pc~.f4ZpM{width:auto}}.ZS7CGc .f4ZpM{height:auto}@media (min-width:600px) and (orientation:landscape),all and 
(min-width:1600px){.NQ5OL .ZS7CGc .f4ZpM{width:115px}}.qiRZ5e .f4ZpM{-webkit-transform:translate(-9%,-3%);-ms-transform:translate(-9%,-3%);transform:translate(-9%,-3%)}.vIv7Gf .f4ZpM{margin:auto;max-height:230px;right:0;top:-3%;-webkit-transform:none;-ms-transform:none;transform:none}.nvYXVd .f4ZpM{-webkit-transform:translate(9%,-3%);-ms-transform:translate(9%,-3%);transform:translate(9%,-3%)}.uOhnzd .f4ZpM{-webkit-transform:translate(24px,0);-ms-transform:translate(24px,0);transform:translate(24px,0)}.MsYMaf .f4ZpM{-webkit-transform:translate(0,0);-ms-transform:translate(0,0);transform:translate(0,0)}.wsArZ[data-ss-mode="1"] .YIi9qf .f4ZpM{max-width:115px}@media (min-width:600px) and (orientation:landscape),all and (min-width:1600px){.NQ5OL .YIi9qf .f4ZpM{max-width:115px}}.QG3Xbe .f4ZpM{max-width:300px}.F6gtje .f4ZpM{-webkit-transform:none;-ms-transform:none;transform:none}@-webkit-keyframes mdc-ripple-fg-radius-in{from{-webkit-animation-timing-function:cubic-bezier(.4,0,.2,1);animation-timing-function:cubic-bezier(.4,0,.2,1);-webkit-transform:translate(var(--mdc-ripple-fg-translate-start,0)) scale(1);transform:translate(var(--mdc-ripple-fg-translate-start,0)) scale(1)}to{-webkit-transform:translate(var(--mdc-ripple-fg-translate-end,0)) scale(var(--mdc-ripple-fg-scale,1));transform:translate(var(--mdc-ripple-fg-translate-end,0)) scale(var(--mdc-ripple-fg-scale,1))}}@keyframes mdc-ripple-fg-radius-in{from{-webkit-animation-timing-function:cubic-bezier(.4,0,.2,1);animation-timing-function:cubic-bezier(.4,0,.2,1);-webkit-transform:translate(var(--mdc-ripple-fg-translate-start,0)) scale(1);transform:translate(var(--mdc-ripple-fg-translate-start,0)) scale(1)}to{-webkit-transform:translate(var(--mdc-ripple-fg-translate-end,0)) scale(var(--mdc-ripple-fg-scale,1));transform:translate(var(--mdc-ripple-fg-translate-end,0)) scale(var(--mdc-ripple-fg-scale,1))}}@-webkit-keyframes 
mdc-ripple-fg-opacity-in{from{-webkit-animation-timing-function:linear;animation-timing-function:linear;opacity:0}to{opacity:var(--mdc-ripple-fg-opacity,0)}}@keyframes mdc-ripple-fg-opacity-in{from{-webkit-animation-timing-function:linear;animation-timing-function:linear;opacity:0}to{opacity:var(--mdc-ripple-fg-opacity,0)}}@-webkit-keyframes mdc-ripple-fg-opacity-out{from{-webkit-animation-timing-function:linear;animation-timing-function:linear;opacity:var(--mdc-ripple-fg-opacity,0)}to{opacity:0}}@keyframes mdc-ripple-fg-opacity-out{from{-webkit-animation-timing-function:linear;animation-timing-function:linear;opacity:var(--mdc-ripple-fg-opacity,0)}to{opacity:0}}.VfPpkd-ksKsZd-XxIAqe{--mdc-ripple-fg-size:0;--mdc-ripple-left:0;--mdc-ripple-top:0;--mdc-ripple-fg-scale:1;--mdc-ripple-fg-translate-end:0;--mdc-ripple-fg-translate-start:0;-webkit-tap-highlight-color:rgba(0,0,0,0);will-change:transform,opacity;position:relative;outline:none;overflow:hidden}.VfPpkd-ksKsZd-XxIAqe::before,.VfPpkd-ksKsZd-XxIAqe::after{position:absolute;border-radius:50%;opacity:0;pointer-events:none;content:""}.VfPpkd-ksKsZd-XxIAqe::before{-webkit-transition:opacity 15ms linear,background-color 15ms linear;transition:opacity 15ms linear,background-color 15ms linear;z-index:1;z-index:var(--mdc-ripple-z-index,1)}.VfPpkd-ksKsZd-XxIAqe::after{z-index:0;z-index:var(--mdc-ripple-z-index,0)}.VfPpkd-ksKsZd-XxIAqe.VfPpkd-ksKsZd-mWPk3d::before{-webkit-transform:scale(var(--mdc-ripple-fg-scale,1));-ms-transform:scale(var(--mdc-ripple-fg-scale,1));transform:scale(var(--mdc-ripple-fg-scale,1))}.VfPpkd-ksKsZd-XxIAqe.VfPpkd-ksKsZd-mWPk3d::after{top:0;left:0;-webkit-transform:scale(0);-ms-transform:scale(0);transform:scale(0);-webkit-transform-origin:center center;-ms-transform-origin:center center;transform-origin:center 
center}.VfPpkd-ksKsZd-XxIAqe.VfPpkd-ksKsZd-mWPk3d-OWXEXe-ZNMTqd::after{top:var(--mdc-ripple-top,0);left:var(--mdc-ripple-left,0)}.VfPpkd-ksKsZd-XxIAqe.VfPpkd-ksKsZd-mWPk3d-OWXEXe-Tv8l5d-lJfZMc::after{-webkit-animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards;animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards}.VfPpkd-ksKsZd-XxIAqe.VfPpkd-ksKsZd-mWPk3d-OWXEXe-Tv8l5d-OmS1vf::after{-webkit-animation:mdc-ripple-fg-opacity-out .15s;animation:mdc-ripple-fg-opacity-out .15s;-webkit-transform:translate(var(--mdc-ripple-fg-translate-end,0)) scale(var(--mdc-ripple-fg-scale,1));-ms-transform:translate(var(--mdc-ripple-fg-translate-end,0)) scale(var(--mdc-ripple-fg-scale,1));transform:translate(var(--mdc-ripple-fg-translate-end,0)) scale(var(--mdc-ripple-fg-scale,1))}.VfPpkd-ksKsZd-XxIAqe::before,.VfPpkd-ksKsZd-XxIAqe::after{top:-50%;left:-50%;width:200%;height:200%}.VfPpkd-ksKsZd-XxIAqe.VfPpkd-ksKsZd-mWPk3d::after{width:var(--mdc-ripple-fg-size,100%);height:var(--mdc-ripple-fg-size,100%)}.VfPpkd-ksKsZd-XxIAqe[data-mdc-ripple-is-unbounded],.VfPpkd-ksKsZd-mWPk3d-OWXEXe-ZNMTqd{overflow:visible}.VfPpkd-ksKsZd-XxIAqe[data-mdc-ripple-is-unbounded]::before,.VfPpkd-ksKsZd-XxIAqe[data-mdc-ripple-is-unbounded]::after,.VfPpkd-ksKsZd-mWPk3d-OWXEXe-ZNMTqd::before,.VfPpkd-ksKsZd-mWPk3d-OWXEXe-ZNMTqd::after{top:0;left:0;width:100%;height:100%}.VfPpkd-ksKsZd-XxIAqe[data-mdc-ripple-is-unbounded].VfPpkd-ksKsZd-mWPk3d::before,.VfPpkd-ksKsZd-XxIAqe[data-mdc-ripple-is-unbounded].VfPpkd-ksKsZd-mWPk3d::after,.VfPpkd-ksKsZd-mWPk3d-OWXEXe-ZNMTqd.VfPpkd-ksKsZd-mWPk3d::before,.VfPpkd-ksKsZd-mWPk3d-OWXEXe-ZNMTqd.VfPpkd-ksKsZd-mWPk3d::after{top:var(--mdc-ripple-top,0);left:var(--mdc-ripple-left,0);width:var(--mdc-ripple-fg-size,100%);height:var(--mdc-ripple-fg-size,100%)}.VfPpkd-ksKsZd-XxIAqe[data-mdc-ripple-is-unbounded].VfPpkd-ksKsZd-mWPk3d::after,.VfPpkd-ksKsZd-mWPk3d-OWXEXe-ZNMTqd.VfPpkd-ksKsZd-mWPk3d::after{width:var(--mdc-rippl
e-fg-size,100%);height:var(--mdc-ripple-fg-size,100%)}.VfPpkd-ksKsZd-XxIAqe::before,.VfPpkd-ksKsZd-XxIAqe::after{background-color:#000;background-color:var(--mdc-ripple-color,#000)}.VfPpkd-ksKsZd-XxIAqe:hover::before,.VfPpkd-ksKsZd-XxIAqe.VfPpkd-ksKsZd-XxIAqe-OWXEXe-ZmdkE::before{opacity:.04;opacity:var(--mdc-ripple-hover-opacity,.04)}.VfPpkd-ksKsZd-XxIAqe.VfPpkd-ksKsZd-mWPk3d-OWXEXe-AHe6Kc-XpnDCe::before,.VfPpkd-ksKsZd-XxIAqe:not(.VfPpkd-ksKsZd-mWPk3d):focus::before{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:.12;opacity:var(--mdc-ripple-focus-opacity,.12)}.VfPpkd-ksKsZd-XxIAqe:not(.VfPpkd-ksKsZd-mWPk3d)::after{-webkit-transition:opacity .15s linear;transition:opacity .15s linear}.VfPpkd-ksKsZd-XxIAqe:not(.VfPpkd-ksKsZd-mWPk3d):active::after{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:.12;opacity:var(--mdc-ripple-press-opacity,.12)}.VfPpkd-ksKsZd-XxIAqe.VfPpkd-ksKsZd-mWPk3d{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity,0.12)}.VfPpkd-Bz112c-LgbsSe{font-size:24px;width:48px;height:48px;padding:12px}.VfPpkd-Bz112c-LgbsSe.VfPpkd-Bz112c-LgbsSe-OWXEXe-e5LLRc-SxQuSe .VfPpkd-Bz112c-Jh9lGc{width:40px;height:40px;margin-top:4px;margin-bottom:4px;margin-right:4px;margin-left:4px}.VfPpkd-Bz112c-LgbsSe.VfPpkd-Bz112c-LgbsSe-OWXEXe-e5LLRc-SxQuSe .VfPpkd-Bz112c-J1Ukfc-LhBDec{max-height:40px;max-width:40px}.VfPpkd-Bz112c-LgbsSe:disabled{color:rgba(0,0,0,.38);color:var(--mdc-theme-text-disabled-on-light,rgba(0,0,0,.38))}.VfPpkd-Bz112c-LgbsSe svg,.VfPpkd-Bz112c-LgbsSe img{width:24px;height:24px}.VfPpkd-Bz112c-LgbsSe{display:inline-block;position:relative;-webkit-box-sizing:border-box;box-sizing:border-box;border:none;outline:none;background-color:transparent;fill:currentColor;color:inherit;text-decoration:none;cursor:pointer;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;z-index:0;overflow:visible}.VfPpkd-Bz112c-LgbsSe 
.VfPpkd-Bz112c-RLmnJb{position:absolute;top:50%;height:48px;left:50%;width:48px;-webkit-transform:translate(-50%,-50%);-ms-transform:translate(-50%,-50%);transform:translate(-50%,-50%)}@media screen and (forced-colors:active){.VfPpkd-Bz112c-LgbsSe.VfPpkd-ksKsZd-mWPk3d-OWXEXe-AHe6Kc-XpnDCe .VfPpkd-Bz112c-J1Ukfc-LhBDec,.VfPpkd-Bz112c-LgbsSe:not(.VfPpkd-ksKsZd-mWPk3d):focus .VfPpkd-Bz112c-J1Ukfc-LhBDec{display:block}}.VfPpkd-Bz112c-LgbsSe:disabled{cursor:default;pointer-events:none}.VfPpkd-Bz112c-LgbsSe[hidden]{display:none}.VfPpkd-Bz112c-LgbsSe-OWXEXe-KVuj8d-Q3DXx{-webkit-box-align:center;-webkit-align-items:center;align-items:center;display:-webkit-inline-box;display:-webkit-inline-flex;display:inline-flex;-webkit-box-pack:center;-webkit-justify-content:center;justify-content:center}.VfPpkd-Bz112c-J1Ukfc-LhBDec{pointer-events:none;border:2px solid transparent;border-radius:6px;-webkit-box-sizing:content-box;box-sizing:content-box;position:absolute;top:50%;left:50%;-webkit-transform:translate(-50%,-50%);-ms-transform:translate(-50%,-50%);transform:translate(-50%,-50%);height:100%;width:100%;display:none}@media screen and (forced-colors:active){.VfPpkd-Bz112c-J1Ukfc-LhBDec{border-color:CanvasText}}.VfPpkd-Bz112c-J1Ukfc-LhBDec::after{content:"";border:2px solid transparent;border-radius:8px;display:block;position:absolute;top:50%;left:50%;-webkit-transform:translate(-50%,-50%);-ms-transform:translate(-50%,-50%);transform:translate(-50%,-50%);height:calc(100% + 4px);width:calc(100% + 4px)}@media screen and (forced-colors:active){.VfPpkd-Bz112c-J1Ukfc-LhBDec::after{border-color:CanvasText}}.VfPpkd-Bz112c-kBDsod{display:inline-block}.VfPpkd-Bz112c-kBDsod.VfPpkd-Bz112c-kBDsod-OWXEXe-IT5dJd,.VfPpkd-Bz112c-LgbsSe-OWXEXe-IT5dJd .VfPpkd-Bz112c-kBDsod{display:none}.VfPpkd-Bz112c-LgbsSe-OWXEXe-IT5dJd 
.VfPpkd-Bz112c-kBDsod.VfPpkd-Bz112c-kBDsod-OWXEXe-IT5dJd{display:inline-block}.VfPpkd-Bz112c-mRLv6{height:100%;left:0;outline:none;position:absolute;top:0;width:100%}.VfPpkd-Bz112c-LgbsSe{--mdc-ripple-fg-size:0;--mdc-ripple-left:0;--mdc-ripple-top:0;--mdc-ripple-fg-scale:1;--mdc-ripple-fg-translate-end:0;--mdc-ripple-fg-translate-start:0;-webkit-tap-highlight-color:rgba(0,0,0,0);will-change:transform,opacity}.VfPpkd-Bz112c-LgbsSe .VfPpkd-Bz112c-Jh9lGc::before,.VfPpkd-Bz112c-LgbsSe .VfPpkd-Bz112c-Jh9lGc::after{position:absolute;border-radius:50%;opacity:0;pointer-events:none;content:""}.VfPpkd-Bz112c-LgbsSe .VfPpkd-Bz112c-Jh9lGc::before{-webkit-transition:opacity 15ms linear,background-color 15ms linear;transition:opacity 15ms linear,background-color 15ms linear;z-index:1;z-index:var(--mdc-ripple-z-index,1)}.VfPpkd-Bz112c-LgbsSe .VfPpkd-Bz112c-Jh9lGc::after{z-index:0;z-index:var(--mdc-ripple-z-index,0)}.VfPpkd-Bz112c-LgbsSe.VfPpkd-ksKsZd-mWPk3d .VfPpkd-Bz112c-Jh9lGc::before{-webkit-transform:scale(var(--mdc-ripple-fg-scale,1));-ms-transform:scale(var(--mdc-ripple-fg-scale,1));transform:scale(var(--mdc-ripple-fg-scale,1))}.VfPpkd-Bz112c-LgbsSe.VfPpkd-ksKsZd-mWPk3d .VfPpkd-Bz112c-Jh9lGc::after{top:0;left:0;-webkit-transform:scale(0);-ms-transform:scale(0);transform:scale(0);-webkit-transform-origin:center center;-ms-transform-origin:center center;transform-origin:center center}.VfPpkd-Bz112c-LgbsSe.VfPpkd-ksKsZd-mWPk3d-OWXEXe-ZNMTqd .VfPpkd-Bz112c-Jh9lGc::after{top:var(--mdc-ripple-top,0);left:var(--mdc-ripple-left,0)}.VfPpkd-Bz112c-LgbsSe.VfPpkd-ksKsZd-mWPk3d-OWXEXe-Tv8l5d-lJfZMc .VfPpkd-Bz112c-Jh9lGc::after{-webkit-animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards;animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards}.VfPpkd-Bz112c-LgbsSe.VfPpkd-ksKsZd-mWPk3d-OWXEXe-Tv8l5d-OmS1vf .VfPpkd-Bz112c-Jh9lGc::after{-webkit-animation:mdc-ripple-fg-opacity-out .15s;animation:mdc-ripple-fg-opacity-out 
.15s;-webkit-transform:translate(var(--mdc-ripple-fg-translate-end,0)) scale(var(--mdc-ripple-fg-scale,1));-ms-transform:translate(var(--mdc-ripple-fg-translate-end,0)) scale(var(--mdc-ripple-fg-scale,1));transform:translate(var(--mdc-ripple-fg-translate-end,0)) scale(var(--mdc-ripple-fg-scale,1))}.VfPpkd-Bz112c-LgbsSe .VfPpkd-Bz112c-Jh9lGc::before,.VfPpkd-Bz112c-LgbsSe .VfPpkd-Bz112c-Jh9lGc::after{top:0;left:0;width:100%;height:100%}.VfPpkd-Bz112c-LgbsSe.VfPpkd-ksKsZd-mWPk3d .VfPpkd-Bz112c-Jh9lGc::before,.VfPpkd-Bz112c-LgbsSe.VfPpkd-ksKsZd-mWPk3d .VfPpkd-Bz112c-Jh9lGc::after{top:var(--mdc-ripple-top,0);left:var(--mdc-ripple-left,0);width:var(--mdc-ripple-fg-size,100%);height:var(--mdc-ripple-fg-size,100%)}.VfPpkd-Bz112c-LgbsSe.VfPpkd-ksKsZd-mWPk3d .VfPpkd-Bz112c-Jh9lGc::after{width:var(--mdc-ripple-fg-size,100%);height:var(--mdc-ripple-fg-size,100%)}.VfPpkd-Bz112c-LgbsSe .VfPpkd-Bz112c-Jh9lGc::before,.VfPpkd-Bz112c-LgbsSe .VfPpkd-Bz112c-Jh9lGc::after{background-color:#000;background-color:var(--mdc-ripple-color,#000)}.VfPpkd-Bz112c-LgbsSe:hover .VfPpkd-Bz112c-Jh9lGc::before,.VfPpkd-Bz112c-LgbsSe.VfPpkd-ksKsZd-XxIAqe-OWXEXe-ZmdkE .VfPpkd-Bz112c-Jh9lGc::before{opacity:.04;opacity:var(--mdc-ripple-hover-opacity,.04)}.VfPpkd-Bz112c-LgbsSe.VfPpkd-ksKsZd-mWPk3d-OWXEXe-AHe6Kc-XpnDCe .VfPpkd-Bz112c-Jh9lGc::before,.VfPpkd-Bz112c-LgbsSe:not(.VfPpkd-ksKsZd-mWPk3d):focus .VfPpkd-Bz112c-Jh9lGc::before{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:.12;opacity:var(--mdc-ripple-focus-opacity,.12)}.VfPpkd-Bz112c-LgbsSe:not(.VfPpkd-ksKsZd-mWPk3d) .VfPpkd-Bz112c-Jh9lGc::after{-webkit-transition:opacity .15s linear;transition:opacity .15s linear}.VfPpkd-Bz112c-LgbsSe:not(.VfPpkd-ksKsZd-mWPk3d):active 
.VfPpkd-Bz112c-Jh9lGc::after{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:.12;opacity:var(--mdc-ripple-press-opacity,.12)}.VfPpkd-Bz112c-LgbsSe.VfPpkd-ksKsZd-mWPk3d{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity,0.12)}.VfPpkd-Bz112c-LgbsSe:disabled:hover .VfPpkd-Bz112c-Jh9lGc::before,.VfPpkd-Bz112c-LgbsSe:disabled.VfPpkd-ksKsZd-XxIAqe-OWXEXe-ZmdkE .VfPpkd-Bz112c-Jh9lGc::before{opacity:0;opacity:var(--mdc-ripple-hover-opacity,0)}.VfPpkd-Bz112c-LgbsSe:disabled.VfPpkd-ksKsZd-mWPk3d-OWXEXe-AHe6Kc-XpnDCe .VfPpkd-Bz112c-Jh9lGc::before,.VfPpkd-Bz112c-LgbsSe:disabled:not(.VfPpkd-ksKsZd-mWPk3d):focus .VfPpkd-Bz112c-Jh9lGc::before{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:0;opacity:var(--mdc-ripple-focus-opacity,0)}.VfPpkd-Bz112c-LgbsSe:disabled:not(.VfPpkd-ksKsZd-mWPk3d) .VfPpkd-Bz112c-Jh9lGc::after{-webkit-transition:opacity .15s linear;transition:opacity .15s linear}.VfPpkd-Bz112c-LgbsSe:disabled:not(.VfPpkd-ksKsZd-mWPk3d):active .VfPpkd-Bz112c-Jh9lGc::after{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:0;opacity:var(--mdc-ripple-press-opacity,0)}.VfPpkd-Bz112c-LgbsSe:disabled.VfPpkd-ksKsZd-mWPk3d{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity,0)}.VfPpkd-Bz112c-LgbsSe .VfPpkd-Bz112c-Jh9lGc{height:100%;left:0;pointer-events:none;position:absolute;top:0;width:100%;z-index:-1}.VfPpkd-dgl2Hf-ppHlrf-sM5MNb{display:inline}.VfPpkd-LgbsSe{position:relative;display:-webkit-inline-box;display:-webkit-inline-flex;display:inline-flex;-webkit-box-align:center;-webkit-align-items:center;align-items:center;-webkit-box-pack:center;-webkit-justify-content:center;justify-content:center;-webkit-box-sizing:border-box;box-sizing:border-box;min-width:64px;border:none;outline:none;line-height:inherit;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;-webkit-appearance:none;overflow:visible;vertical-align:middle;background:transparent}.VfPpkd-LgbsSe 
.VfPpkd-BFbNVe-bF1uUb{width:100%;height:100%;top:0;left:0}.VfPpkd-LgbsSe::-moz-focus-inner{padding:0;border:0}.VfPpkd-LgbsSe:active{outline:none}.VfPpkd-LgbsSe:hover{cursor:pointer}.VfPpkd-LgbsSe:disabled{cursor:default;pointer-events:none}.VfPpkd-LgbsSe[hidden]{display:none}.VfPpkd-LgbsSe .VfPpkd-kBDsod{margin-left:0;margin-right:8px;display:inline-block;position:relative;vertical-align:top}[dir=rtl] .VfPpkd-LgbsSe .VfPpkd-kBDsod,.VfPpkd-LgbsSe .VfPpkd-kBDsod[dir=rtl]{margin-left:8px;margin-right:0}.VfPpkd-LgbsSe .VfPpkd-UdE5de-uDEFge{font-size:0;position:absolute;-webkit-transform:translate(-50%,-50%);-ms-transform:translate(-50%,-50%);transform:translate(-50%,-50%);top:50%;left:50%;line-height:normal}.VfPpkd-LgbsSe .VfPpkd-vQzf8d{position:relative}.VfPpkd-LgbsSe .VfPpkd-J1Ukfc-LhBDec{pointer-events:none;border:2px solid transparent;border-radius:6px;-webkit-box-sizing:content-box;box-sizing:content-box;position:absolute;top:50%;left:50%;-webkit-transform:translate(-50%,-50%);-ms-transform:translate(-50%,-50%);transform:translate(-50%,-50%);height:calc(100% + 4px);width:calc(100% + 4px);display:none}@media screen and (forced-colors:active){.VfPpkd-LgbsSe .VfPpkd-J1Ukfc-LhBDec{border-color:CanvasText}}.VfPpkd-LgbsSe .VfPpkd-J1Ukfc-LhBDec::after{content:"";border:2px solid transparent;border-radius:8px;display:block;position:absolute;top:50%;left:50%;-webkit-transform:translate(-50%,-50%);-ms-transform:translate(-50%,-50%);transform:translate(-50%,-50%);height:calc(100% + 4px);width:calc(100% + 4px)}@media screen and (forced-colors:active){.VfPpkd-LgbsSe .VfPpkd-J1Ukfc-LhBDec::after{border-color:CanvasText}}@media screen and (forced-colors:active){.VfPpkd-LgbsSe.VfPpkd-ksKsZd-mWPk3d-OWXEXe-AHe6Kc-XpnDCe .VfPpkd-J1Ukfc-LhBDec,.VfPpkd-LgbsSe:not(.VfPpkd-ksKsZd-mWPk3d):focus .VfPpkd-J1Ukfc-LhBDec{display:block}}.VfPpkd-LgbsSe 
.VfPpkd-RLmnJb{position:absolute;top:50%;height:48px;left:0;right:0;-webkit-transform:translateY(-50%);-ms-transform:translateY(-50%);transform:translateY(-50%)}.VfPpkd-vQzf8d+.VfPpkd-kBDsod{margin-left:8px;margin-right:0}[dir=rtl] .VfPpkd-vQzf8d+.VfPpkd-kBDsod,.VfPpkd-vQzf8d+.VfPpkd-kBDsod[dir=rtl]{margin-left:0;margin-right:8px}svg.VfPpkd-kBDsod{fill:currentColor}.VfPpkd-LgbsSe-OWXEXe-dgl2Hf{margin-top:6px;margin-bottom:6px}.VfPpkd-LgbsSe{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;text-decoration:none}.VfPpkd-LgbsSe{padding:0 8px 0 8px}.VfPpkd-LgbsSe-OWXEXe-k8QpJ{-webkit-transition:-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);transition:-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);transition:box-shadow .28s cubic-bezier(.4,0,.2,1);transition:box-shadow .28s cubic-bezier(.4,0,.2,1),-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);padding:0 16px 0 16px}.VfPpkd-LgbsSe-OWXEXe-k8QpJ.VfPpkd-LgbsSe-OWXEXe-Bz112c-UbuQg{padding:0 12px 0 16px}.VfPpkd-LgbsSe-OWXEXe-k8QpJ.VfPpkd-LgbsSe-OWXEXe-Bz112c-M1Soyc{padding:0 16px 0 12px}.VfPpkd-LgbsSe-OWXEXe-MV7yeb{-webkit-transition:-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);transition:-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);transition:box-shadow .28s cubic-bezier(.4,0,.2,1);transition:box-shadow .28s cubic-bezier(.4,0,.2,1),-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);padding:0 16px 0 16px}.VfPpkd-LgbsSe-OWXEXe-MV7yeb.VfPpkd-LgbsSe-OWXEXe-Bz112c-UbuQg{padding:0 12px 0 16px}.VfPpkd-LgbsSe-OWXEXe-MV7yeb.VfPpkd-LgbsSe-OWXEXe-Bz112c-M1Soyc{padding:0 16px 0 12px}.VfPpkd-LgbsSe-OWXEXe-INsAgc{border-style:solid;-webkit-transition:border .28s cubic-bezier(.4,0,.2,1);transition:border .28s cubic-bezier(.4,0,.2,1)}.VfPpkd-LgbsSe-OWXEXe-INsAgc 
.VfPpkd-Jh9lGc{border-style:solid;border-color:transparent}.VfPpkd-LgbsSe{--mdc-ripple-fg-size:0;--mdc-ripple-left:0;--mdc-ripple-top:0;--mdc-ripple-fg-scale:1;--mdc-ripple-fg-translate-end:0;--mdc-ripple-fg-translate-start:0;-webkit-tap-highlight-color:rgba(0,0,0,0);will-change:transform,opacity}.VfPpkd-LgbsSe .VfPpkd-Jh9lGc::before,.VfPpkd-LgbsSe .VfPpkd-Jh9lGc::after{position:absolute;border-radius:50%;opacity:0;pointer-events:none;content:""}.VfPpkd-LgbsSe .VfPpkd-Jh9lGc::before{-webkit-transition:opacity 15ms linear,background-color 15ms linear;transition:opacity 15ms linear,background-color 15ms linear;z-index:1}.VfPpkd-LgbsSe .VfPpkd-Jh9lGc::after{z-index:0}.VfPpkd-LgbsSe.VfPpkd-ksKsZd-mWPk3d .VfPpkd-Jh9lGc::before{-webkit-transform:scale(var(--mdc-ripple-fg-scale,1));-ms-transform:scale(var(--mdc-ripple-fg-scale,1));transform:scale(var(--mdc-ripple-fg-scale,1))}.VfPpkd-LgbsSe.VfPpkd-ksKsZd-mWPk3d .VfPpkd-Jh9lGc::after{top:0;left:0;-webkit-transform:scale(0);-ms-transform:scale(0);transform:scale(0);-webkit-transform-origin:center center;-ms-transform-origin:center center;transform-origin:center center}.VfPpkd-LgbsSe.VfPpkd-ksKsZd-mWPk3d-OWXEXe-ZNMTqd .VfPpkd-Jh9lGc::after{top:var(--mdc-ripple-top,0);left:var(--mdc-ripple-left,0)}.VfPpkd-LgbsSe.VfPpkd-ksKsZd-mWPk3d-OWXEXe-Tv8l5d-lJfZMc .VfPpkd-Jh9lGc::after{-webkit-animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards;animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards}.VfPpkd-LgbsSe.VfPpkd-ksKsZd-mWPk3d-OWXEXe-Tv8l5d-OmS1vf .VfPpkd-Jh9lGc::after{-webkit-animation:mdc-ripple-fg-opacity-out .15s;animation:mdc-ripple-fg-opacity-out .15s;-webkit-transform:translate(var(--mdc-ripple-fg-translate-end,0)) scale(var(--mdc-ripple-fg-scale,1));-ms-transform:translate(var(--mdc-ripple-fg-translate-end,0)) scale(var(--mdc-ripple-fg-scale,1));transform:translate(var(--mdc-ripple-fg-translate-end,0)) scale(var(--mdc-ripple-fg-scale,1))}.VfPpkd-LgbsSe 
.VfPpkd-Jh9lGc::before,.VfPpkd-LgbsSe .VfPpkd-Jh9lGc::after{top:-50%;left:-50%;width:200%;height:200%}.VfPpkd-LgbsSe.VfPpkd-ksKsZd-mWPk3d .VfPpkd-Jh9lGc::after{width:var(--mdc-ripple-fg-size,100%);height:var(--mdc-ripple-fg-size,100%)}.VfPpkd-Jh9lGc{position:absolute;-webkit-box-sizing:content-box;box-sizing:content-box;overflow:hidden;z-index:0;top:0;left:0;bottom:0;right:0}.VfPpkd-LgbsSe{font-family:Roboto,sans-serif;font-size:.875rem;letter-spacing:.0892857143em;font-weight:500;text-transform:uppercase;height:36px;border-radius:4px}.VfPpkd-LgbsSe:not(:disabled){color:#6200ee}.VfPpkd-LgbsSe:disabled{color:rgba(0,0,0,.38)}.VfPpkd-LgbsSe .VfPpkd-kBDsod{font-size:1.125rem;width:1.125rem;height:1.125rem}.VfPpkd-LgbsSe .VfPpkd-Jh9lGc::before{background-color:#6200ee}.VfPpkd-LgbsSe .VfPpkd-Jh9lGc::after{background-color:#6200ee}.VfPpkd-LgbsSe:hover .VfPpkd-Jh9lGc::before,.VfPpkd-LgbsSe.VfPpkd-ksKsZd-XxIAqe-OWXEXe-ZmdkE .VfPpkd-Jh9lGc::before{opacity:.04}.VfPpkd-LgbsSe.VfPpkd-ksKsZd-mWPk3d-OWXEXe-AHe6Kc-XpnDCe .VfPpkd-Jh9lGc::before,.VfPpkd-LgbsSe:not(.VfPpkd-ksKsZd-mWPk3d):focus .VfPpkd-Jh9lGc::before{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:.12}.VfPpkd-LgbsSe:not(.VfPpkd-ksKsZd-mWPk3d) .VfPpkd-Jh9lGc::after{-webkit-transition:opacity .15s linear;transition:opacity .15s linear}.VfPpkd-LgbsSe:not(.VfPpkd-ksKsZd-mWPk3d):active .VfPpkd-Jh9lGc::after{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:.12}.VfPpkd-LgbsSe.VfPpkd-ksKsZd-mWPk3d{--mdc-ripple-fg-opacity:var(--mdc-text-button-pressed-state-layer-opacity,0.12)}.VfPpkd-LgbsSe .VfPpkd-Jh9lGc{border-radius:4px}.VfPpkd-LgbsSe .VfPpkd-J1Ukfc-LhBDec{border-radius:2px}.VfPpkd-LgbsSe 
https://sre.google/prodcast/
Google SRE Prodcast - Google's podcast about Site Reliability Engineering and production software.

Seasons:
- Season 1: SRE Fundamentals
- Season 2: Life of an SRE
- Season 3: Champions of the Internet
- Season 4: Friends and Trends
- Season 5: More Friends, More Trends

Season 1: SRE Fundamentals
Season 1 discusses concepts from the SRE Book with experts at Google.

Season 1, Episode 9: Postmortems with Ayelet Sachto
Ayelet Sachto offers advice on creating an actionable, transparent, and blameless postmortem culture.
Further reading:
- SRE Book Chapter 15 - Postmortem Culture: Learning from Failure
- Anatomy of An Incident
- Postmortem Action Items: Plan the Work and Work the Plan
- Shrinking the impact of production incidents using SRE principles (CRE Life Lessons)

Season 1, Episode 8: Incident Management with Adrienne Walcer
Adrienne Walcer discusses how to approach and organize incident management efforts throughout the production lifecycle.

Further reading:
- SRE Book Chapter 13 - Emergency Response
- SRE Book Chapter 14 - Managing Incidents
- SRE Book Chapter 16 - Tracking Outages
- Anatomy of an Incident

Season 1, Episode 7: On-Call Rotations with Andrew Widdowson (APW)
Andrew Widdowson (APW) shares strategies for successful on-call rotations.

Further reading:
- SRE Book Chapter 11 - Being On-Call

Season 1, Episode 6: Automation with Pierre Palatin
Pierre Palatin dives into different automation strategies, how to build confidence in your system, and why designing the UI may be your biggest challenge.

Further reading:
- SRE Book Chapter 7 - The Evolution of Automation at Google
- SRE Book Chapter 8 - Release Engineering
- Prodspec and Annealing | USENIX White Paper
- xkcd: Automation

Season 1, Episode 5: Client-Transparent Migrations with Pavan Adharapurapu
Pavan Adharapurapu details how to approach large-scale migrations while optimizing for user experience.

Further reading:
- SRE Book Chapter 17 - Testing for Reliability
- SRE Book Chapter 27 - Reliable Product Launches at Scale

Season 1, Episode 4: Rethinking SLOs with Narayan Desai
Narayan Desai explains why SLOs can be problematic and proposes alternative methods for monitoring complex, large-scale systems.
Further reading:
- SRE Book Chapter 4 - Service Level Objectives

Season 1, Episode 3: Alerting with Amelia Harrison
Amelia Harrison advises on when and how to alert, ideal coverage, and tuning.

Further reading:
- SRE Book Chapter 10 - Practical Alerting

Season 1, Episode 2: Customer-Centric Monitoring with Silvia Esparrachiari
Silvia Esparrachiari talks about the challenges of monitoring and the importance of understanding your users.

Further reading:
- SRE Book Chapter 6 - Monitoring Distributed Systems

Season 1, Episode 1: SRE Philosophy with Jennifer Mace (Macey)
What is SRE, anyway? Jennifer Mace (Macey) gives us her definition of "site reliability engineer," discusses how to manage risk, and shares key questions to ask developers.

Further reading:
- SRE Book Chapter 3 - Embracing Risk
- SRE Book Chapter 9 - Simplicity
- SRE Book Chapter 5 - Toil
- Generic Mitigations
- Multi-Single Tenancy

Season 1, Episode 0: Creating the SRE Prodcast with John Reese (JTR)
Host MP English and former Google SRE John Reese (JTR) chat about the creation of the Prodcast.

Further reading:
- Prodcast Season 1 Forward

Season 2: Life of an SRE
Season 2, "Life of An SRE", examines the career path and growth of individuals in SRE.

Season 2, Episode 8: Life of An SRE: Beyond Google
Former Google SREs, or "Xooglers", talk with hosts MP and Steve McGhee about site reliability engineering outside of Google. What's the difference in scale? What skills are generally valuable? And why can't you build "SRE in a box" that jump-starts pretty much any organization?
Further reading:
- Enterprise Roadmap to SRE - Google - Site Reliability Engineering
- What SRE Could Be: Systems Reliability Engineering
- Thinking in Systems

Season 2, Episode 7: Life of An SRE with Sabrina Farmer
Sabrina Farmer, VP of Engineering at Google, talks about her career journey through Site Reliability Engineering. What does management mean? What's involved in being an effective manager? And what's a feasibility study? Hear some great advice on how to get what you expect out of a role, wherever on the ladder it is.

Season 2, Episode 6: Life of An SRE with Dave Reisner
Dave Reisner talks about his path to Staff SRE, from ArchLinux contributor through DevOps to software engineer. This episode emphasizes the value of strong mentoring and manager relationships, and the challenges of work-life balance.

Further reading:
- A Case Study in Community-Driven Software Adoption

Season 2, Episode 5: Life of An SRE with Stephen Benjamin
Explore the role and responsibilities of an SRE manager with Stephen Benjamin.

Further reading:
- SRE as a team sport (O'Reilly)
- Developing a Google SRE Culture | Coursera

Season 2, Episode 4: Life of An SRE with Jessica Theodat
Explore the role and responsibilities of a Senior SRE with Jessica Theodat, as she discusses life-work balance, the value of mentoring, and being a Black woman in SRE.

Season 2, Episode 3: Life of An SRE with Shannon Brady and Theo Klein
Explore the career paths of SREs Shannon Brady and Theo Klein, as they discuss their paths to Site Reliability Engineering and finding their areas of expertise.
Further reading:
- SRE Book Update: Postmortem Culture

Season 2, Episode 2: Life of An SRE with Mariuxi Vasconez and Julian Alarcon
In this episode, Mariuxi and Julian discuss their paths to SRE: what drew them initially to SRE, and what motivates them to continue developing skills.

Further reading:
- Introducing Non-Abstract Large System Design, Google SRE Book

Season 2, Episode 1: Life of An SRE with Tom Cranitch and Megan Yin
How does one become an SRE? And what's the career like? In this episode, Tom and Megan discuss their path to SRE.

Further reading:
- Invent More, Toil Less
- Postmortem Culture: Learning from Failure, Google SRE Book
- Training Site Reliability Engineers

Season 3: Champions of the Internet
Season 3, "Champions of the Internet", discusses software systems designed and built by SRE.

Season 3, Episode 14: Special Episode: You Missed a Page from Telebot
This episode features Javi Beltran, a Google engineering lead who created the "Telebot" theme song. With our beloved hosts, Steve McGhee and Jordan Greenberg, Beltran discusses the origins of the song, created in 2012 for Google's paging system. The song was meant to add a touch of levity to what could be a stressful situation for engineers on-call. Beltran also unveils a new, more modern remix of "Telebot" (created in collaboration with our host, Jordan Greenberg!) which will be used as the intro theme for the podcast's next season.

Further reading:
- Chapter 11 - Being On-Call

Season 3, Episode 13: Imperative vs. Declarative Change Workflows with Dominic Hutton & Niccolo' Cascarano
In this episode of the Prodcast, guests Dominic Hutton (Staff SRE, HashiCorp) and Niccolo' Cascarano (Senior Staff SRE at Google) join hosts Steve McGhee and Jordan Greenberg to dive into configurations.
They discuss the differences between imperative and declarative configuration, explore the benefits and challenges of each approach, and stress the need for careful consideration when choosing between the two. Ultimately, the goal is to achieve reliable and maintainable systems through effective configuration management.

Further reading:
- Prodspec and Annealing
- Dominic's Blog

Season 3, Episode 12: Human Factors in Complex Systems with Casey Rosenthal and John Allspaw
This episode features Casey Rosenthal (Founder, Cirrusly.ai) and John Allspaw (Founder and Principal, Adaptive Capacity Labs), joining our hosts Steve McGhee and Jordan Greenberg. Together they discuss how resilience appears in Software Engineering and SRE, and explore the importance of understanding the human factors involved in adapting to system failures, highlighting the need for a more qualitative and holistic approach to understanding how engineers successfully adapt to system behavior and improve overall reliability.

Further reading:
- Seeking SRE: Conversations about Running Production Systems at Scale
- What Is Incident Severity, but a Lie Agreed Upon?

Season 3, Episode 11: Embracing Complexity with Christina Schulman & Dr. Laura Maguire
In this episode of the Prodcast, we are joined by guests Christina Schulman (Staff SRE, Google) and Dr. Laura Maguire (Principal Engineer, Trace Cognitive Engineering). They emphasize the human element of SRE and the importance of fostering a culture of collaboration, learning, and resilience in managing complex systems. They touch upon topics such as the need for diverse perspectives and collaboration in incident response and the necessity of embracing complexity, and explore concepts such as aerodynamic stability, and more.
Further reading:
- Embracing Risk

Season 3, Episode 10: Maglev: load balancing at Google with Cody Smith and Trisha Weir
In this episode, Cody Smith (CTO and Co-founder, Camus Energy) & Trisha Weir (SRE Department Lead, Google) join hosts Steve McGhee and Jordan Greenberg to discuss their experience developing Maglev, a highly available and distributed network load balancer (NLB) that is an integral part of the cloud architecture managing traffic that comes into a datacenter. Starting with Maglev's humble beginnings as a skunkworks effort, Cody and Trisha recount the challenges they faced, and emphasize the importance of psychological safety, collaboration, and adaptability in SRE innovation.

Further reading:
- Maglev: A Fast and Reliable Software Network Load Balancer
- Google shares software network load balancer design powering GCP networking

Season 3, Episode 9: Profiling data with Pat Somaru and Narayan Desai
In this episode, guests Narayan Desai (Principal SRE, Google) and Pat Somaru (Senior Production Engineer, Meta) join hosts Steve McGhee and Florian Rathgeber to discuss the challenges of observability and working with profiling data. The discussion covers intriguing topics like noise reduction, workload modeling, and the need for better tools and techniques to handle high-cardinality data.

Further reading:
- Sto: A Better Way to Store and Query Profiler Data
- Principled Performance Analytics
- YourKit

Season 3, Episode 8: Google Public DNS (8.8.8.8) with Wilmer van der Gaast and Andy Sykes
This episode features Google engineers Wilmer van der Gaast (Production on-call) and Andy Sykes (Senior Staff Systems Engineer, SRE), joining hosts Steve McGhee and Jordan Greenberg, to discuss the development and maintenance of Google Public DNS (8.8.8.8).
They highlight the initial motivations for creating the service and technical challenges like cache poisoning and load balancing, as well as the collaborative effort between SRE and SWE teams to address these issues. They also reflect on the evolving nature of SRE and advice for aspiring SREs.

Further reading:
- An Illustrated Guide to the Kaminsky DNS Vulnerability

Season 3, Episode 7: SRE in the Retail and Gaming Worlds with Jordan Chernev & Scott Bowers
Guests Jordan Chernev (Senior Technology Executive) and Scott Bowers (SRE, Gearbox Software), who hail from the retail and gaming industries, respectively, join hosts Steve McGhee and Jordan Greenberg to discuss the unique challenges of Site Reliability Engineering in their industries. They share the importance of aligning SLOs with user experience, strategies for handling spikes in traffic, communicating with users during outages, and investing in reliability.

Season 3, Episode 6: Incident Response with Sarah Butt and Vrai Stacey
Sarah Butt (Principal Engineer, Centralized Incident Response, Salesforce) and Vrai Stacey (Staff Software Engineer, Google) join hosts Steve McGhee and Jordan Greenberg to dive into incident response, particularly tooling and software for reliability incidents. Tune in for an in-depth discussion on topics such as the importance of communication and collaboration during incidents, and the role of tooling in supporting incident response processes. Sarah and Vrai also share personal takeaways from incidents they have experienced.
Season 3, Episode 5: Building Reliable Systems with Silvia Botros and Niall Murphy
Silvia Botros (SRE Architect, Twilio | Author of "High Performance MySQL, 4th edition") and Niall Murphy (Co-founder & CEO, Stanza) join hosts Steve McGhee and Jordan Greenberg to discuss cultural shifts in database engineering, rate limiting, load shedding, holistic approaches to reliability, proactive measures to build customer trust, and much more!

Further reading:
- High Performance MySQL, 4th edition
- The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations

Season 3, Episode 4: Creating Systems that are Safe with Liz Fong-Jones
Liz Fong-Jones (former Google SRE and current Field CTO at honeycomb.io) joins hosts Steve McGhee and Jordan Greenberg for a lively discussion centered around observability, its evolution from monitoring, and its role in modern software development. Tune in for more on the importance of observability as a spectrum, the evolving role of SREs, and advice to aspiring software engineers.

Season 3, Episode 3: Production Problems Are For All! with Ben Treynor Sloss
Ben Treynor Sloss (VP of Engineering, Google) joins hosts Steve McGhee and Dr. Jennifer Petoff (Director of Technical Infrastructure Education, Google) to share the evolution of SRE and its impact on software development, how AI and ML significantly impact SRE practices, and the future of SRE. Ben coined the term "Site Reliability Engineering" for his team of (now) 4,000 software engineers, engaged in what were traditionally operations functions. Under Ben's leadership, Google SRE wrote two best-selling books on SRE. Since then, the rest of the SaaS industry has come to adopt the SRE name, mission, and practices.
Season 3, Episode 2: There Remains a Huge Amount of Work to Do, with Healfdene Goguen
In this episode, Healfdene Goguen (Principal Engineer, Google) joins hosts Steve McGhee and Jordan Greenberg to discuss the vast amount of work to be done by SREs, and the fascinating challenges to tackle with clear real-world implications. It's a truly exciting time to be an SRE at Google!

Season 3, Episode 1: SRE, a Basis of Influence with Amy Tobey & Vladyslav Ukis
In this season of the Google Prodcast, current and former SREs, both within and outside of Google, chat with hosts Steve McGhee and Jordan Greenberg to discuss software systems designed and built by SREs. For "episode zero", guests Amy Tobey and Vladyslav Ukis set the stage for the season with a lively discussion about what Software Engineering means to Site Reliability Engineering.

Season 4: Friends and Trends
Season 4 is about SRE "Friends and Trends". We discuss what's coming up in the SRE space, from new technology to modernizing processes and more, as well as the friends we make along the way.

Season 4, Episode 10: The One with Ben Good and Our Kubernetes Friends
In this special episode, hosts Steve McGhee from the Google SRE Prodcast and Kaslin Fields from the Google Kubernetes Podcast welcome Google Cloud Solutions Architect Ben Good to discuss platform engineering. Listeners can look forward to hearing about the role of Kubernetes as a tool for building platforms, how to create "golden paths" for developers, and the importance of observability and self-service in platform design. The conversation also touches on industry trends, the bespoke nature of platforms, and how DORA metrics can be applied to platform engineering practices.
Further reading: Deployment Archetypes for Cloud Applications

Season 4, Episode 9: The One With AI Agents, Ramón Llamas, and Swapnil Haria
Google Staff SRE Ramón Llamas and Google Software Engineer Swapnil Haria join our hosts to explore how AI agents are revolutionizing production management, from summarizing alerts and finding hidden errors to proactively preventing outages. Learn about the challenges of evaluating non-deterministic systems and the fascinating interplay between human expertise and emerging AI capabilities in ensuring robust and reliable infrastructure.
Further reading:
FLASH: A Reliable Workflow Automation Agent
LLexus: an AI agent system for incident management

Season 4, Episode 8: The One with Technical Program Managers and Karanveer Anand
This episode features Google Technical Program Manager (TPM) Karanveer Anand, who joins our hosts to discuss the unique role of TPMs in Site Reliability Engineering (SRE). The conversation highlights how SRE TPMs bridge the gap between technical details and business impact, managing complex projects with inter-team dependencies and ensuring system reliability, particularly in the rapidly evolving AI landscape.
Further reading:
10 Years of Crashing Google
Project management à la SRE: How to juggle the needs of your project and production

Season 4, Episode 7: The One with STPA, Jeffrey Snover, and Theo Klein
This episode discusses Systems Theoretic Process Analysis (STPA), a method for analyzing complex systems. Theo Klein, a Google SRE, and Jeffrey Snover, a Distinguished Engineer at Google, explain that STPA focuses on identifying how system accidents and losses occur due to a loss of control, rather than component failures. STPA helps identify design flaws early, even before code is written!
The discussion highlights that STPA is a human-driven process, prompting critical questions about system goals and potential losses, and that Google is adapting the pure STPA approach for commercial software development to make it more practical and efficient.
Further reading:
STPA (System Theoretic Process Analysis) at Google
The Evolution of SRE at Google
Ten Machine Requirements To Satisfy Essentials Of Joint Activity
Mapping a Better Future with STPA
STAMP Workshop
The STAMP Institute
MIT STAMP Workshop Tutorials
How Complex Systems Fail
CAST Tutorial

Season 4, Episode 6: The One with Startups and Adam Fletcher
In this episode, hosts Steve McGhee and Matt Siegler are joined by guest Adam Fletcher, CEO and Co-Founder of MarketStreet. They discuss the current state of web development with LLMs, managing technical debt in startups, the evolution of infrastructure and reliability engineering, the role of community in technology, and the future of software engineering with AI.
Further reading:
Case Studies in Infrastructure Change Management
Life of an Airline Flight: What Systems Get You From Here to There via the Air -- Adam Fletcher

Season 4, Episode 5: The One With SLOs and Sal Furino
In this episode, Sal Furino, Customer Reliability Engineer at Bloomberg, discusses all things Service Level Objectives (SLOs) with hosts Steve McGhee and Matt Siegler. Together, they dig into what successful SLOs look like, how they relate to users, and how SLOs provide an effective framework for joint decisions about system reliability across product, engineering, and leadership teams.
Further reading:
Graceful Degradation and SLOs by Niall Murphy (re: LLM mimicking responses)
Implementing Service Level Objectives by Alex Hidalgo
9 SLIs; OH MY!
SRE CON23 EMEA: 9 Things you should do when starting to use SLOs
Fred Moyer's "error budgets as a sentence" at Monitorama 2022
SLO Development Lifecycle
R9Y.dev reliability map
Platform Engineering New York Meetup group
Sal's LinkedIn

Season 4, Episode 4: The One With the Future of SRE and Matt Zelesko
Matt Zelesko, the head of Site Reliability Engineering at Google, discusses the evolution of SRE, highlighting the shift from traditional operations to a model that balances velocity and reliability to better serve the rapid advancements in AI and ML. He emphasizes that SRE's core mission is to enable partners to move quickly while meeting reliability goals, and that the sheer scale of Google's infrastructure necessitates the SRE model for cross-system problem-solving. Zelesko envisions AI as a crucial assistant for SREs, improving incident detection, mitigation, and postmortem processes, and allowing SREs to focus on more complex engineering challenges and risk management earlier in the development cycle, while still valuing the hands-on experience of operating production infrastructure.
Further reading:
Chapter 12 from the SRE book: Non-Abstract Large System Design
Chapter 23 from the SRE book: Managing Critical State
Complexities of Capacity Management for Distributed Services

Season 4, Episode 3: The One With AI and Todd Underwood
In this Google Prodcast episode, Todd Underwood, a reliability expert from Anthropic with experience at Google and OpenAI, joins the hosts to discuss the current state and future of AI and ML in production, particularly for SREs. Topics discussed include the challenges of AI-Ops, limitations of current anomaly detection, the potential for AI in config authoring and troubleshooting, trade-offs between product velocity and reliability, the evolving role of SREs in an AI-driven world, and optimal timing for book publication.
Further reading:
ML for Operations: Pitfalls, Dead Ends, and Hope
AIOps: Prove It! An Open Letter to Vendors Selling AI for SREs

Season 4, Episode 2: The One With Data Centers and Peter Pellerzi
This episode features guest Peter Pellerzi (Distinguished Engineer, Google). Peter and the hosts, Matt Siegler and Steve McGhee, focus on the physical infrastructure side of SRE, discussing topics such as the scale of Google's data centers, handling incidents like power outages, testing and preparedness strategies, the use of AI for optimizing cooling plants, and more. Peter also emphasizes the importance of community support, proactive planning, and learning from real-world testing and incidents to ensure high availability and resilience in data center operations.
Further reading:
DeepMind AI Reduces Google Data Centre Cooling Bill by 40%
Hear how data centers change the world around them

Season 4, Episode 1: The One With Security and Jessica Theodat
Jessica Theodat (Senior SRE & Security Tech Lead, Google) joins hosts Jordan Greenberg and Steve McGhee to discuss the intersection of security and site reliability engineering at Google. Jessica touches on risk management, the unique nature of security incident responses, and the shared goals between security and SRE. The crew also delves into the balance between security and SRE, acknowledging the tension and the need for collaboration between teams to achieve business goals and user trust.

Season 4, Episode 0: We're back with Season 4!
In this episode, the hosts and producers of the Prodcast (including our new co-host, Matt Siegler!) reflect on the previous season and introduce the new season's focus on upcoming trends in Site Reliability Engineering (SRE) and AI, and the friends we make along the way.
They also introduce new elements coming in Season 4, such as a video format and a feedback form.

Season 5: More Friends, More Trends

Season 5, Episode 2: The One With SLOs
In this episode, we welcome Alex Hidalgo and Brian Singer of nobl9 to discuss Service Level Objectives (SLOs). Alex and Brian talk about how SLOs can establish a vernacular across industry verticals, leading to constructive conversations and a shared understanding of how to implement SRE practices. Join us for a lively discussion that ranges across SLO topics!
Further reading:
Building good SLOs—CRE life lessons
SLO Engineering Case Studies

Season 5, Episode 1: The One With Stephanie Hippo and Observability
In this episode, Steph Hippo, Platform Engineering Director at Honeycomb, joins the Prodcast to discuss AI and SRE. Steph explains how observability helps us understand complex systems from their outputs, and provides a foundation for SRE to respond to system problems. This episode explains how AI and observability build a self-reinforcing loop. We also discuss how AI can detect and respond to certain classes of incidents, leading to self-healing systems and allowing SREs to focus on novel and interesting problems. She advises small businesses adopting AI to learn from others' mistakes (post-mortems) and to commit time and budget to experimentation.
Meet your Hosts
MP English, Systems Engineer (Seasons 1 & 2)
Steve McGhee, Reliability Advocate, SRE (Seasons 2+)
Jordan Greenberg, Engineering Program Manager, GCP (Seasons 3+)
Matthew Siegler, Machine Learning Infrastructure SRE (Seasons 4+)
Florian Rathgeber, Site Reliability Engineer, GCP (Seasons 3+)

Meet our Production Team
Paul Guglielmino, Staff Software Engineer (Sound Engineer)
Sunny Hsiao, Program Manager (Producer)
Salim Virji, Site Reliability Engineer | SRE Education Program Manager (Producer)

Foreword
The Google Prodcast Team has gone through quite a few iterations and hiatuses over the years, and many people have had a hand in its existence. For the longest time, a handful of SREs produced the Prodcast for the listening pleasure of the other engineers here at Google. A lot of the credit for the project really goes to John Reese, known around Google as JTR. The Prodcast was a project he kept alive as other team members came and went. Eventually, JTR decided to explore the world outside of Google, and the Prodcast was left in an uncertain state. I had been a part of the team for a while, Viv had just joined the team, and the other member of the team had to step away due to other commitments. At this point, we decided to make a hard pivot. We decided that we wanted to make a podcast for more than just engineers at Google. We wanted to make something that would be of interest to folks across organizations and technical implementations. In his last act as part of the Prodcast, JTR put us in touch with Jennifer Petoff, Director of SRE Education, in order to have the support of the SRE organization behind us. With that, we turned to one of the most studied resources in SRE: the Google SRE Book. We didn't want to rehash what the book already discussed in detail; we might as well have just recorded an audiobook if that was our goal.
Originally, we were aiming for something in the neighborhood of an update (a revision) to the SRE Book. What we ended up with was a series of conversations with domain experts at Google that often challenged the orthodoxy of the SRE Book, sometimes entirely reframing the topic, as is particularly the case with our episode on SLOs, one of my personal favorites. I found myself learning new things during every recording session, even though we had already met with our guests to map the episodes out! It was an absolute pleasure chatting with all our guests, to the point that we often continued talking after we finished recording. I am immensely grateful to all our guests for the time they contributed so that we may put all of this together for you. I hope you enjoy listening as much as we enjoyed recording. To the present and future reliability of you and your services,
-- MP English from the Prodcast Team

Acknowledgments
This season is brought to you by hosts Jordan Greenberg, Steve McGhee, Florian Rathgeber, and Matt Siegler, with contributions from many SREs behind the scenes. The Prodcast is produced by Paul Guglielmino and Salim Virji. The Prodcast theme is Telebot, by Javi Beltran and Jordan Greenberg. In addition to our Prodcast guests, we acknowledge the contributions of MP English, Cara Pardo, Jennifer Petoff, John Reese, Viv, and Pamela Vong.
2026-01-13T09:29:20
https://w3techs.com/technologies/cross/dns_server/site_element
Usage Survey of DNS Server Providers broken down by Site Elements
provided by Q-Success

Technologies > DNS Servers > by Site Elements

Usage of DNS server providers broken down by site elements
Detailed statistics in our extensive DNS server providers market report.
This diagram shows the percentages of websites using various DNS server providers broken down by site elements. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See technologies overview for explanations on the methodologies used in the surveys.
How to read the diagram: Cloudflare is used by 15.4% of all the websites. Cloudflare is used by 14.7% of all the websites that use CSS as site element.
DNS server provider usage by site element (percentage of websites in each group):

Site element                     Cloudflare  GoDaddy Group  Newfold Digital Group
Overall                          15.4%       10.1%          4.0%
CSS                              14.7%       10.3%          4.2%
Compression                      16.4%       10.6%          4.3%
Default protocol https           15.9%       10.5%          3.9%
Cookies                          12.0%       12.4%          4.1%
Default subdomain www            12.9%       10.7%          3.9%
HTTP/3                           32.3%       10.0%          1.8%
HTTP/2                           36.8%       11.8%          3.4%
HTTP Strict Transport Security   13.7%       15.2%          2.8%
IPv6                             42.5%       4.2%           0.8%
ETag                             5.3%        13.3%          2.7%
QUIC                             4.4%        3.1%           0.9%
W3Techs.com, 13 January 2026

More detailed statistics: You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.

Technology Brief: DNS Server Providers
A DNS (domain name system) server manages internet domain names and their associated records such as IP addresses. We group brands that are owned by the same entity to show the real relevant market shares.

Latest related posting: Web Technologies of the Year 2025 (5 January 2026). We compiled the list of web technologies that saw the largest increase in usage in 2025.

About Us | Disclaimer | Terms of Use | Privacy Policy | Advertising | Contact. W3Techs on LinkedIn, Mastodon, Bluesky. Copyright © 2009-2026 Q-Success
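The "how to read the diagram" rule is just a conditional share: the percentage of sites on a given DNS provider, computed either over all surveyed sites or only over sites that use a particular site element. A minimal sketch in Python; the provider and element names echo the table above, but the survey records here are made up for illustration (this is not W3Techs data):

```python
# Conditional usage share: percentage of sites on a given DNS provider,
# optionally restricted to sites that use a particular site element.

def breakdown(sites, provider, element=None):
    """Return the share (in %) of `provider` among `sites`,
    restricted to sites using `element` when one is given."""
    pool = [s for s in sites if element is None or element in s["elements"]]
    if not pool:
        return 0.0
    hits = sum(1 for s in pool if s["dns"] == provider)
    return round(100 * hits / len(pool), 1)

# Hypothetical survey records: one dict per surveyed website.
sites = [
    {"dns": "Cloudflare", "elements": {"CSS", "HTTP/2"}},
    {"dns": "Cloudflare", "elements": {"CSS", "IPv6"}},
    {"dns": "GoDaddy Group", "elements": {"CSS"}},
    {"dns": "Newfold Digital Group", "elements": {"HTTP/2"}},
]

print(breakdown(sites, "Cloudflare"))         # share among all sites -> 50.0
print(breakdown(sites, "Cloudflare", "CSS"))  # share among CSS sites -> 66.7
```

Note that the two figures have different denominators, which is why a provider's share can be higher or lower in a column than in the "Overall" row.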
2026-01-13T09:29:20
https://www.linkedin.com/products/leadsquared-converse/?trk=products_details_guest_other_products_by_org_section_product_link_result-card_full-click
LeadSquared Converse | LinkedIn
LeadSquared Converse
Customer Relationship Management (CRM) Software by LeadSquared

About
Make conversations more organized for your team and more personal for your customers. Converse gets all your (WhatsApp, SMS, Chatbot) conversations centralized on one platform. Use Converse to boost agent productivity and delight customers with faster, more contextual conversations.
1. Stop juggling between multiple platforms
2. Get complete context on every lead
3. Respond to customers (faster than your competition)
4. Get more out of your marketing campaigns
5. Converse for WhatsApp: Leverage the world's biggest messaging platform for your business.
- Manage all your customers, agents and conversations with one business number.
- Route messages to the assigned lead owner. Automatically.
- Send bulk marketing campaigns. Notify agents when a lead responds positively.
- Give your agents access to a lead's details and conversation history.
- Simplify your verification/onboarding experience. Allow your customers to share documents and files over WhatsApp.

This product is intended for: Customer Relationship Management Specialist, Customer Relationship Management Manager, Sales Manager, Admissions Specialist, Information Technology Manager, Marketing Specialist, Sales Specialist

Featured customers of LeadSquared Converse: Practo (Hospitals and Health Care, 141,137 followers); FIITJEE (Education Administration Programs, 98,237 followers)

Similar products (Customer Relationship Management (CRM) Software): Sales Cloud, Zoho CRM, Bigin by Zoho CRM, Experian DataShare, Odoo CRM, Freshsales

LeadSquared products: Lead Management, LeadSquared CRM, LeadSquared Customer Portal, LeadSquared Marketing Automation (Marketing Automation Software), LeadSquared Mobile CRM, LeadSquared Sales Execution CRM, LeadSquared Sales Performance Suite

LinkedIn © 2026
2026-01-13T09:29:20
https://w3techs.com/technologies/cross/dns_server/advertising
Usage Survey of DNS Server Providers broken down by Advertising Networks
provided by Q-Success

Technologies > DNS Servers > by Advertising Networks

Usage of DNS server providers broken down by advertising networks
Detailed statistics in our extensive DNS server providers market report.
This diagram shows the percentages of websites using various DNS server providers broken down by advertising networks. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See technologies overview for explanations on the methodologies used in the surveys.
How to read the diagram: Cloudflare is used by 15.4% of all the websites. Cloudflare is used by 17.7% of all the websites that use Google Ads as advertising network.

Advertising network   Cloudflare  GoDaddy Group  Newfold Digital Group
Overall               15.4%       10.1%          4.0%
Google Ads            17.7%       10.8%          4.2%
Amazon Associates     16.7%       9.4%           5.9%
W3Techs.com, 13 January 2026

More detailed statistics: You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
2026-01-13T09:29:21
https://www.linkedin.com/products/leadsquared-marketing-automation?similarProducts=true&trk=products_details_guest_similar_products_section_sign_in
LeadSquared Marketing Automation | LinkedIn
LeadSquared Marketing Automation
Marketing Automation Software by LeadSquared

About
LeadSquared is an end-to-end marketing automation software, helping 1000+ businesses worldwide push leads faster down the sales funnel and build meaningful prospect relationships. With LeadSquared, businesses can:
• Reduce lead leakage to zero with landing pages, connectors, APIs and more
• Understand intent with 360-degree user profile, behavior, activity + social tracking and more
• Identify a user's intent to buy with tracking and trigger engagement actions
• Pre-build user behavior + engagement flows for important actions, like pricing page view
• Send relevant content right when users want it to encourage sales actions
• Trigger engagement across all channels and devices: emails, text messages, social, phone calls, portals and more
• Get prescriptive insights across lead sources, engagement campaigns, user journeys and more
• Increase CLV & reactivate dead leads via cross-sell signal capture and triggered campaigns
• Connect all systems to a single front-end for all marketing.

Your Guide to Marketing Automation. Key topics covered:
1. For sales teams: learn how to automate daily processes like creating tasks, posting activities & assigning leads
2. For marketing teams: learn how to automate daily tasks like sending campaigns, designing behavior-based workflows & creating marketing reports
3. Industry-specific best practices & use cases

Featured customers of LeadSquared Marketing Automation: Practo (Hospitals and Health Care, 141,137 followers); Universal Sompo General Insurance Co. Ltd. (Insurance, 52,656 followers)

Similar products (Marketing Automation Software): Marketing Cloud, Zoho Campaigns, Freshmarketer, HCL Unica, Brevo (formerly Sendinblue), RD Station Marketing

LeadSquared products: Lead Management, LeadSquared Converse, LeadSquared CRM, LeadSquared Customer Portal, LeadSquared Mobile CRM, LeadSquared Sales Execution CRM, LeadSquared Sales Performance Suite
2026-01-13T09:29:21
https://w3techs.com/technologies/cross/dns_server/proxy
Usage Survey of DNS Server Providers broken down by Reverse Proxy Services
provided by Q-Success

Technologies > DNS Servers > by Reverse Proxies

Usage of DNS server providers broken down by reverse proxy services
Detailed statistics in our extensive DNS server providers market report.
This diagram shows the percentages of websites using various DNS server providers broken down by reverse proxy services. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See technologies overview for explanations on the methodologies used in the surveys.
How to read the diagram: Cloudflare is used by 15.4% of all the websites. Cloudflare is used by 59.7% of all the websites that use Cloudflare as reverse proxy service.

Reverse proxy service   Cloudflare  GoDaddy Group  Newfold Digital Group
Overall                 15.4%       10.1%          4.0%
Cloudflare              59.7%       13.9%          2.6%
Amazon CloudFront       10.9%       7.0%           1.7%
Fastly                  10.6%       27.1%          6.7%
Akamai                  8.6%        13.0%          3.4%
DDoS-Guard              2.2%        0.9%           0.1%
Sucuri                  3.6%        68.9%          4.7%
W3Techs.com, 13 January 2026

More detailed statistics: You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
2026-01-13T09:29:21
https://w3techs.com/technologies/details/dn-teamblue
Usage Statistics and Market Share of team.blue as DNS Server Provider, January 2026
provided by Q-Success

Technologies > DNS Servers > team.blue

Usage statistics of team.blue as DNS server provider
Request an extensive team.blue market report.
These diagrams show the usage statistics of team.blue as DNS server provider. See technologies overview for explanations on the methodologies used in the surveys. Our reports are updated daily.

team.blue is used as DNS server provider by 3.0% of all the websites.

Subcategories of team.blue
This diagram shows the percentages of websites using various subcategories of team.blue. How to read the diagram: Register.it is used by 13.2% of all the websites that use team.blue.

Register.it 13.2%
WebSupport 9.8%
TransIP 8.5%
Natro 6.7%
Loopia 6.6%
Combell 6.5%
SuperHosting.BG 4.9%
Simply.com 4.6%
Vimexx 4.6%
Webempresa 4.4%
Enartia Group 4.0%
Names.co.uk 2.7%
Amen 2.5%
PTisp 1.9%
Curanet 1.7%
Nominalia 1.6%
Dominios.pt 1.6%
Magyar Hosting 1.6%
Active 24 1.2%
DanDomain 1.2%
Keliweb 1.1%
Proserve 1.0%
Ticimax 1.0%
LCN 0.9%
Register365 0.9%
Hostingpalvelu 0.8%
Domainhotelli 0.8%
Hosting Ireland 0.8%
Easyhost 0.5%
LetsHost 0.5%
ScanNet 0.4%
Raidboxes 0.4%
Swizzonic 0.3%
Planeetta 0.3%
Wannafind 0.2%
UKDedicated 0.2%
Hypernode 0.1%
catalyst2 0.1%
Maxcluster 0.1%
Simplyhosting, DDS, Simply Transit, Signet, Sfera, VDX, Webnode: each less than 0.1%
W3Techs.com, 13 January 2026
Percentages of websites using various subcategories of team.blue. Note: a website may use more than one subcategory of team.blue.

Historical trend
This diagram shows the historical trend in the percentage of websites using team.blue. Our dedicated trend survey shows more DNS servers usage trends. You can find growth rates of team.blue compared to all other DNS server providers in our team.blue market report.

Market position
This diagram shows the market position of team.blue in terms of popularity and traffic compared to the most popular DNS server providers. Our dedicated market survey shows more DNS servers market data.
Popular sites using team.blue: Spaggiari.eu, Nirvam.it, Windguru.cz, Alo.rs, Semana.es, Gegen-hartz.de, Dizy.com, Unieuro.it, Felizes.pt, Yenibeygir.com

Random selection of sites using team.blue: Haticekoc.com, Chernorizets.com, Circusppc.com, Bigbagseurope.es, Koventinka.cz

Sites using team.blue only recently: Muziekweb.nl, Aeroportodinapoli.it, Berg.se, Dogloversgold.eu, Motorboot.com

More examples of sites: You can find more examples of sites using team.blue in our team.blue market report, or you can request a custom web technology market report.

Technology comparisons: Our visitors often compare the usage statistics of team.blue with ArvanCloud, FirstVDS and Flexwebhosting.

Technology Brief: team.blue (Category: DNS Server Providers)
team.blue owns various web hosting and internet services brands, headquartered in Belgium. Website: team.blue

Latest related posting: Web Technologies of the Year 2024 (2 January 2025). We compiled the list of web technologies that saw the largest increase in usage in 2024.
2026-01-13T09:29:21
https://w3techs.com/technologies/pagespeed/dns_server/ttfb
Page Speed Report, fastest TTFB, for DNS Server Providers

DNS server providers ranked by average page speed.

This table shows average page speeds per DNS server provider. We show two measurements, obtained from Google's Chrome User Experience Report (CrUX). Time to First Byte (TTFB) measures the time from establishing the connection to the web server until content starts to be served; see the TTFB definition. Largest Contentful Paint (LCP) measures the time from when a web page first starts loading to the point when the largest image or text block becomes visible within the page; see the LCP definition. We include DNS server providers for which we have at least 100 measurements, and calculate the average over these sites. See the technologies overview for further explanations of the methodologies used in the surveys.

  DNS Server Provider    TTFB in s    LCP in s
  Janela Digital         0.051        0.085
  MakeShop Korea         0.057        0.032
  Shoptet                0.069        0.044

You can find page speed data for all 639 DNS server providers for which we have sufficient performance data in our DNS server providers market report and in our historical performance trends report.
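The aggregation described above (a per-provider mean of a speed metric, with providers below 100 measurements excluded) can be sketched as follows. The measurement tuples are synthetic and the function name is our own; real CrUX data would come from Google's published dataset:

```python
from collections import defaultdict
from statistics import mean

MIN_MEASUREMENTS = 100  # providers below this cutoff are excluded

def average_speed(measurements, threshold=MIN_MEASUREMENTS):
    """Average a per-site speed metric (in seconds) per provider.

    measurements -- iterable of (provider, seconds) pairs
    """
    by_provider = defaultdict(list)
    for provider, seconds in measurements:
        by_provider[provider].append(seconds)
    return {
        provider: round(mean(values), 3)
        for provider, values in by_provider.items()
        if len(values) >= threshold
    }

# Synthetic measurements -- not real CrUX data.
data = ([("FastDNS", 0.05)] * 120
        + [("SlowDNS", 0.30)] * 150
        + [("RareDNS", 0.01)] * 5)   # only 5 sites: dropped by the cutoff
averages = average_speed(data)
```

The cutoff is what keeps a provider with a handful of unusually fast sites (like `RareDNS` here) from topping the ranking on noise.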
Technology Brief: DNS Server Providers
A DNS (domain name system) server manages internet domain names and their associated records, such as IP addresses. We group brands that are owned by the same entity to show the real relevant market shares.

Latest related posting: Web Technologies of the Year 2025 (5 January 2026). We compiled the list of web technologies that saw the largest increase in usage in 2025.
2026-01-13T09:29:21
https://w3techs.com/technologies/cross/dns_server/tag_manager
Usage Survey of DNS Server Providers broken down by Tag Managers

This diagram shows the percentages of websites using various DNS server providers broken down by tag managers. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See the technologies overview for explanations of the methodologies used in the surveys. How to read the diagram: Cloudflare is used by 15.4% of all websites, and by 18.1% of all websites that use Google Tag Manager as tag manager.

  DNS Server Provider      Overall    Google Tag Manager
  Cloudflare               15.4%      18.1%
  GoDaddy Group            10.1%      10.8%
  Newfold Digital Group     4.0%       4.1%

W3Techs.com, 13 January 2026

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
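The conditional percentages in these breakdown diagrams are ordinary shares computed over a restricted base: the share of sites using a provider, among only the sites that use a given second technology. A sketch with invented site records (the `breakdown` helper and the `*.example` sites are our own illustration):

```python
def breakdown(sites, provider, condition=None):
    """Percent of sites using `provider`; if `condition` is given, the base
    is restricted to sites whose technology set contains `condition`."""
    pool = [techs for techs in sites.values()
            if condition is None or condition in techs]
    if not pool:
        return 0.0
    return 100.0 * sum(1 for techs in pool if provider in techs) / len(pool)

# Invented site records -- not real survey data.
sites = {
    "a.example": {"Cloudflare", "Google Tag Manager"},
    "b.example": {"Cloudflare", "Google Tag Manager"},
    "c.example": {"GoDaddy", "Google Tag Manager"},
    "d.example": {"GoDaddy"},
}
overall = breakdown(sites, "Cloudflare")                          # 50.0
among_gtm = breakdown(sites, "Cloudflare", "Google Tag Manager")  # about 66.7
```

Here Cloudflare's share rises from 50% overall to about 66.7% among tag-manager users, which is the same kind of difference the diagram's "Overall" vs. "Google Tag Manager" columns express.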
2026-01-13T09:29:21
https://w3techs.com/technologies/cross/dns_server/javascript_library
Usage Survey of DNS Server Providers broken down by JavaScript Libraries

This diagram shows the percentages of websites using various DNS server providers broken down by JavaScript libraries. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See the technologies overview for explanations of the methodologies used in the surveys. How to read the diagram: Cloudflare is used by 15.4% of all websites, and by 14.9% of all websites that use jQuery as JavaScript library.

  JavaScript Library   Cloudflare   GoDaddy Group   Newfold Digital Group
  Overall              15.4%        10.1%            4.0%
  jQuery               14.9%        10.0%            4.5%
  Bootstrap            17.0%         9.3%            4.6%
  Underscore           16.0%         8.5%            5.0%
  React                 6.6%         6.2%            2.3%
  Modernizr            13.1%        10.2%            5.4%
  Lodash                3.9%         4.7%            1.2%
  Popper               19.6%         9.0%            4.6%
  Moment.js            17.8%         8.8%            4.2%
  Next.js              31.6%        10.3%            2.1%
  GSAP                 17.3%        11.5%            4.4%
  UIkit                11.9%        11.2%            4.2%
  Backbone             16.0%        12.2%            5.9%

W3Techs.com, 13 January 2026

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
2026-01-13T09:29:21
https://www.chiark.greenend.org.uk/~sgtatham/putty/maillist.html
PuTTY Updates

If you would like to be told when new releases of PuTTY come out, you can subscribe to the PuTTY-announce mailing list. Do this by going to the list information and subscription page. Please do not request a subscription by mailing the PuTTY authors.

Please do not send mail to PuTTY-announce! It is not a discussion list. It is not a place to report bugs to. It is a list of people who want to receive one e-mail, from the PuTTY developers, every time there is a new release. All posts from other people, even list members, will be rejected. If you want to contact the PuTTY team, see the Feedback page.

Please do not subscribe an address to the PuTTY-announce list if it has a spam filter which requires the sender to confirm incoming messages. The PuTTY team does not have the time to reply to all such confirmation requests. If we receive confirmation messages in response to a release announcement, we will ignore them, and you won't receive the mail. It is your responsibility to ensure that an address subscribed to PuTTY-announce can receive mail from it without our personal attention.

If you want to modify or cancel your subscription, you can do all of this from the mailing list information page. If you want to comment on this web site, see the Feedback page.

(last modified on Sat Feb 8 11:06:01 2025)
2026-01-13T09:29:21
https://w3techs.com/technologies/cross/dns_server/top_level_domain
Usage Survey of DNS Server Providers broken down by Top Level Domains

This diagram shows the percentages of websites using various DNS server providers broken down by top level domains. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See the technologies overview for explanations of the methodologies used in the surveys. How to read the diagram: Cloudflare is used by 15.4% of all websites, and by 16.7% of all websites that use .com as top level domain.

  Top Level Domain   Cloudflare   GoDaddy Group   Newfold Digital Group
  Overall            15.4%        10.1%            4.0%
  .com               16.7%        15.4%            5.6%
  .org               17.0%        15.9%            9.3%
  .de                 3.6%         8.1%            0.1%
  .br                19.3%         1.6%           10.6%
  .ru                 6.6%         0.0%            0.0%
  .uk                16.6%        18.3%            1.4%
  .net               20.3%         8.5%            4.4%
  .jp                 1.6%         0.2%            0.1%
  .it                 6.7%         1.0%            0.1%
  .fr                 6.6%         0.6%            0.1%
  .nl                10.1%         0.8%            0.1%
  .pl                 9.4%         0.3%            0.1%
  .au                19.3%        12.1%           10.3%
  .in                13.1%        22.8%           12.3%
  .ca                15.5%        23.5%            4.9%

W3Techs.com, 13 January 2026

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
2026-01-13T09:29:21
https://w3techs.com/technologies/cross/dns_server/content_language
Usage Survey of DNS Server Providers broken down by Content Languages

This diagram shows the percentages of websites using various DNS server providers broken down by content languages. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See the technologies overview for explanations of the methodologies used in the surveys. How to read the diagram: Cloudflare is used by 15.4% of all websites, and by 18.1% of all websites that use English as content language.

  Content Language   Cloudflare   GoDaddy Group   Newfold Digital Group
  Overall            15.4%        10.1%            4.0%
  English            18.1%        18.0%            6.5%
  Spanish            12.8%         6.9%            3.8%
  German              4.8%         7.0%            0.1%
  Japanese            2.5%         0.2%            0.1%
  French              9.2%         2.2%            0.5%
  Portuguese         22.2%         2.6%           10.4%
  Russian            10.0%         0.4%            0.1%
  Italian             7.6%         1.3%            0.3%
  Dutch, Flemish     10.0%         0.9%            0.1%
  Polish             10.5%         0.4%            0.1%
  Turkish            25.4%         3.1%            0.4%
  Chinese            26.0%         7.0%            1.8%
  Persian             9.4%         0.1%            0.1%
  Vietnamese         26.5%         1.6%            0.2%
  Czech               5.5%         0.3%            0.1%

W3Techs.com, 13 January 2026

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
2026-01-13T09:29:21
https://w3techs.com/technologies/cross/dns_server/data_center
Usage Survey of DNS Server Providers broken down by Data Center Providers

This diagram shows the percentages of websites using various DNS server providers broken down by data center providers. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See the technologies overview for explanations of the methodologies used in the surveys. How to read the diagram: Cloudflare is used by 15.4% of all websites, and by 14.3% of all websites that use Amazon as data center provider.

  Data Center Provider    Cloudflare   GoDaddy Group   Newfold Digital Group
  Overall                 15.4%        10.1%            4.0%
  Amazon                  14.3%        22.8%            3.1%
  Google                   6.2%         9.3%            2.2%
  Hostinger               11.1%         2.9%            0.6%
  OVH                      4.7%         2.5%            0.5%
  United Internet          1.2%         1.7%            0.2%
  Hetzner                  8.8%         3.4%            0.5%
  Squarespace              3.3%        19.9%            5.4%
  Newfold Digital Group    3.5%         3.4%           80.6%
  team.blue                2.4%         1.1%            0.1%
  GoDaddy Group            2.3%        84.7%            2.4%
  DigitalOcean            18.3%        18.0%            3.7%
  XServer                  0.3%         0.0%            0.0%
  GMO Internet Group       0.3%         0.1%            0.0%
  Microsoft                8.9%        17.4%            3.8%
  Sakura                   0.5%         0.1%            0.0%
  Aruba Group              1.0%         0.2%            0.0%

W3Techs.com, 13 January 2026

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
2026-01-13T09:29:21
https://w3techs.com/technologies/details/dn-groupibm
Usage Statistics and Market Share of IBM Group as DNS Server Provider, January 2026

These diagrams show the usage statistics of IBM Group as DNS server provider. See the technologies overview for explanations of the methodologies used in the surveys. Our reports are updated daily.

IBM Group is used as DNS server provider by 1.3% of all websites.

Subcategories of IBM Group

This diagram shows the percentages of websites using the various subcategories of IBM Group. How to read the diagram: NS1 is used by 97.6% of all websites that use IBM Group.

  NS1    97.6%
  IBM     2.4%

W3Techs.com, 13 January 2026. Note: a website may use more than one subcategory of IBM Group.

Historical trend

This diagram shows the historical trend in the percentage of websites using IBM Group. Our dedicated trend survey shows more DNS server usage trends. You can find growth rates of IBM Group compared to all other DNS server providers in our IBM Group market report.

Market position

This diagram shows the market position of IBM Group in terms of popularity and traffic compared to the most popular DNS server providers. Our dedicated market survey shows more DNS server market data.

Popular sites using IBM Group: Linkedin.com, Github.com, Bing.com, Pinterest.com, Msn.com, Spotify.com, Roblox.com, Nytimes.com, Theguardian.com, Ebay.com

Random selection of sites using IBM Group: Joleevers.com, Iop.kiev.ua, Emberssa.com, Embalagemnet.com.br, Akar.store

Sites using IBM Group only recently: Montblanc.com, Myglo.com, Lottohelden.de, Elit.ro, Xsplit.com

You can find more examples of sites using IBM Group in our IBM Group market report, or you can request a custom web technology market report. Our visitors often compare the usage statistics of IBM Group with All-inkl.com, Alfahosting and Sprinthost.

Technology Brief: IBM Group
Category: DNS Server Providers. IBM is a multinational IT company headquartered in the USA. Website: ibm.com
2026-01-13T09:29:21
https://w3techs.com/technologies/cross/dns_server/traffic_analysis
Usage Survey of DNS Server Providers broken down by Traffic Analysis Tools

This diagram shows the percentages of websites using various DNS server providers broken down by traffic analysis tools. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See the technologies overview for explanations of the methodologies used in the surveys. How to read the diagram: Cloudflare is used by 15.4% of all websites, and by 17.1% of all websites that use Google Analytics as traffic analysis tool.

  Traffic Analysis Tool      Cloudflare   GoDaddy Group   Newfold Digital Group
  Overall                    15.4%        10.1%            4.0%
  Google Analytics           17.1%        10.8%            4.5%
  Meta Pixel                 20.3%        11.3%            4.4%
  WordPress Jetpack          12.7%         6.6%            9.7%
  Yandex.Metrica              9.3%         0.6%            0.2%
  Microsoft Clarity          23.1%        12.9%            3.4%
  Hotjar                     24.1%        11.7%            3.7%
  MonsterInsights            13.5%        10.5%            7.9%
  Cloudflare Web Analytics   77.5%         7.5%            1.4%
  Matomo                     15.5%         5.9%            2.0%
  Microsoft UET              24.8%        16.2%            3.5%
  Snowplow                    7.6%        21.1%           18.4%
  New Relic                  20.1%        14.3%            4.4%
  TikTok Pixel               23.4%        10.7%            2.5%
  LinkedIn Insight Tag       23.6%        15.8%            4.5%
  HubSpot                    25.9%        18.5%            5.0%
  WP Statistics              11.3%         4.0%            2.3%

W3Techs.com, 13 January 2026

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
2026-01-13T09:29:21
https://w3techs.com/technologies/cross/dns_server/email_server
Usage Survey of DNS Server Providers broken down by Email Server Providers

This diagram shows the percentages of websites using various DNS server providers broken down by email server providers. Cross-technology reports only include technologies with more than 1% usage to ensure statistical significance of the results. See the technologies overview for explanations of the methodologies used in the surveys. How to read the diagram: Cloudflare is used by 15.4% of all websites, and by 18.6% of all websites that use Gmail as email server provider.

  Email Server Provider    Cloudflare   GoDaddy Group   Newfold Digital Group
  Overall                  15.4%        10.1%            4.0%
  Gmail                    18.6%        16.3%            5.9%
  Microsoft                15.3%        25.5%            4.4%
  United Internet           3.1%         0.2%            0.1%
  Hostinger                 8.5%         0.5%            0.1%
  team.blue                 4.9%         0.1%            0.0%
  Newfold Digital Group     8.8%         0.9%           82.0%
  GoDaddy Group            10.2%        73.2%            0.7%
  Namecheap                24.0%         0.5%            0.1%
  OVH                       5.7%         0.1%            0.0%
  Zoho                     28.5%        14.6%            3.5%
  XServer                   0.6%         0.0%            0.0%
  GMO Internet Group        0.6%         0.0%            0.0%
  SiteGround                6.5%         0.6%            0.1%
  Yandex                   15.5%         0.9%            0.2%
  Aruba Group               2.6%         0.0%            0.0%
  Proofpoint Group         12.4%        44.7%            4.6%
  Group.one                 2.7%         0.1%            0.0%
  OpenSRS                   4.8%         0.9%            0.4%
  Sakura                    0.8%         0.0%            0.0%

W3Techs.com, 13 January 2026

You can find complete breakdown reports of 726 DNS server providers in our DNS server providers market reports.
2026-01-13T09:29:21